Past Speakers

There are 98 speakers recorded in our database.

Connie Trojan

1 talk
  • 2024-11-28: Diffusion Generative Modelling for Divide-and-Conquer MCMC

Rui Zhang

2 talks
  • 2025-10-30: A Dynamic Perspective of Matérn Gaussian Processes

    The ubiquitous Gaussian process (GP) models in statistics and machine learning (Williams and Rasmussen, 2006) are static by default, whether taken in the weight-space or the function-space view (Kanagawa et al., 2025): the observation and test locations have no unilateral dependency order, which also explains the cubic scaling of computational cost in GP regression. The dynamic view of Gaussian processes, although available only for a class of GP models, reformulates the dependency structure unilaterally (Whittle, 1954), enabling sequential inference for GP regression at a computational cost that can scale linearly (Hartikainen and Särkkä, 2010; Särkkä and Hartikainen, 2012) with little to no approximation. This talk explores this dynamic perspective of (Matérn) Gaussian processes and some of its consequences. (A brief illustrative sketch of the state-space view appears after this speaker's entries.)

  • 2024-06-06: Unadjusted Barker as an SDE Numerical Scheme
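
    A brief, illustrative sketch of the state-space view mentioned in the abstract above: the Matérn-1/2 kernel corresponds exactly to an Ornstein-Uhlenbeck process, so one-dimensional GP regression with this kernel can be run as a Kalman filter in linear time. This is not the speaker's code; the kernel parameters, noise level, and toy data below are arbitrary choices, and only the filtering (forward) pass is shown.

```python
# Illustrative sketch (not the speaker's code): Matern-1/2 GP regression run as a
# Kalman filter over the equivalent Ornstein-Uhlenbeck state-space model.
# Kernel k(t, t') = var * exp(-|t - t'| / ell)  <->  dx = -(x / ell) dt + sqrt(2 var / ell) dW.
import numpy as np

def matern12_filter(t, y, var=1.0, ell=0.5, noise=0.1):
    """O(n) filtering means and variances for y_i = x(t_i) + N(0, noise)."""
    order = np.argsort(t)
    t, y = t[order], y[order]
    m, P = 0.0, var                       # stationary prior for the latent state
    means, variances = [], []
    for i in range(len(t)):
        if i > 0:                         # predict: exact OU transition between inputs
            a = np.exp(-(t[i] - t[i - 1]) / ell)
            m, P = a * m, a * a * P + var * (1.0 - a * a)
        gain = P / (P + noise)            # update: one scalar Kalman step per observation
        m, P = m + gain * (y[i] - m), (1.0 - gain) * P
        means.append(m)
        variances.append(P)
    return np.array(means), np.array(variances)

# toy data: a noisy sine observed at 2000 locations, filtered at linear cost
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 2000))
y = np.sin(t) + 0.3 * rng.normal(size=t.size)
m, v = matern12_filter(t, y)
print(m[-3:], v[-3:])
```

    Recovering the full GP posterior at every input would add a backward (Rauch-Tung-Striebel) smoothing pass, still at linear cost, in contrast to the cubic cost of the standard function-space solve.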

Yuga Iguchi

1 talk
  • 2025-05-15: A Closed-Form Transition Density Expansion for Elliptic and Hypo-Elliptic SDEs

Giorgos Vasdekis

1 talk
  • 2025-12-04: Sampling with time-changed Markov processes
    Slides

    We introduce a framework of time-changed Markov processes to speed up the convergence of Markov chain Monte Carlo (MCMC) algorithms in the context of multimodal distributions and rare event simulation. The time-changed process is defined by adjusting the speed of time of a base process via a user-chosen, state-dependent function. We apply this framework to several Markov processes from the MCMC literature, such as Langevin diffusions and piecewise deterministic Markov processes, obtaining novel modifications of classical algorithms and also re-discovering known MCMC algorithms. We prove theoretical properties of the time-changed process under suitable conditions on the base process, focusing on connecting the stationary distributions and qualitative convergence properties such as geometric and uniform ergodicity, as well as a functional central limit theorem. Time permitting, we will compare our approach with the framework of space transformations, clarifying the similarities between the approaches. This is joint work with Andrea Bertazzi.
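
    A rough sketch of the time-change mechanism described above, not the speaker's algorithm: multiplying the generator of a base process by a state-dependent speed s(x) rescales its stationary density by 1/s(x), so a base Langevin diffusion built to target pi(x)s(x) yields, after the time change, a process whose stationary density is proportional to pi(x). The bimodal target, bounded speed function, step size, and unadjusted Euler-Maruyama discretisation below are all illustrative choices (the discretisation introduces bias).

```python
# Illustrative sketch of a time-changed Langevin diffusion (not the speaker's method).
# The base diffusion targets pi(x) * s(x); running it at state-dependent speed s(x)
# gives a process whose stationary density is proportional to pi(x).
import numpy as np

def log_pi(x):                         # toy bimodal target: mixture of N(-3, 1) and N(3, 1)
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def log_s(x):                          # bounded speed-up around the low-density valley at 0
    return np.log1p(5.0 * np.exp(-0.5 * x ** 2))

def grad(f, x, h=1e-5):                # simple finite-difference gradient
    return (f(x + h) - f(x - h)) / (2.0 * h)

rng = np.random.default_rng(1)
x, dt, n = 0.0, 0.01, 100_000
samples = np.empty(n)
for i in range(n):
    s = np.exp(log_s(x))
    drift = s * grad(lambda z: log_pi(z) + log_s(z), x)   # s(x) * grad log(pi * s)
    x = x + drift * dt + np.sqrt(2.0 * s * dt) * rng.normal()
    samples[i] = x
print("fraction of samples in the right-hand mode:", np.mean(samples > 0.0))
```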

Lanya Yang

1 talk
  • 2025-11-20: Exchangeable Particle Gibbs for Markov Jump Processes
    Slides

    Inference in stochastic reaction-network models—such as the SEIR epidemic model or the Lotka–Volterra predator–prey system—is crucial for understanding the dynamics of interacting systems in epidemiology, ecology, and systems biology. These models are typically represented as Markov jump processes (MJPs) with intractable likelihoods. As a result, particle Markov chain Monte Carlo (particle MCMC) methods, particularly the Particle Gibbs (PG) sampler, have become standard tools for Bayesian inference. However, PG suffers from severe particle degeneracy, especially in high-dimensional state spaces, leading to poor mixing and inefficiency. In this talk, I focus on improving the efficiency of particle MCMC methods for inference in reaction networks by addressing the degeneracy problem. Building on recent work on the Exchangeable Particle Gibbs (xPG) sampler for continuous-state diffusions, this project develops a novel version of xPG tailored to discrete-state reaction networks, where randomness is driven by Poisson processes rather than Brownian motion. The proposed method retains the exchangeability framework of xPG while adapting it to the structural and computational challenges specific to reaction networks.
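
    For readers less familiar with the Markov jump process representation referred to above, the sketch below forward-simulates the Lotka-Volterra reaction network with Gillespie's direct method. It is only a simulator for the latent process, with illustrative rate constants, and is not the exchangeable Particle Gibbs sampler discussed in the talk.

```python
# Gillespie's direct method for the Lotka-Volterra predator-prey reaction network.
# This simulates the latent Markov jump process only; it is not the xPG sampler.
import numpy as np

# reactions: prey birth, predation (prey -> predator), predator death
STOICHIOMETRY = np.array([[1, 0], [-1, 1], [0, -1]])
RATES = np.array([0.5, 0.0025, 0.3])           # illustrative rate constants

def hazards(state):
    prey, predator = state
    return RATES * np.array([prey, prey * predator, predator])

def gillespie(state, t_end, rng):
    t, path = 0.0, [(0.0, tuple(state))]
    while True:
        h = hazards(state)
        total = h.sum()
        if total == 0.0:                       # absorbing state (both populations extinct)
            break
        t += rng.exponential(1.0 / total)      # waiting time to the next reaction
        if t > t_end:
            break
        j = rng.choice(len(h), p=h / total)    # which reaction fires
        state = state + STOICHIOMETRY[j]
        path.append((t, tuple(state)))
    return path

rng = np.random.default_rng(2)
path = gillespie(np.array([100, 50]), t_end=30.0, rng=rng)
print(len(path), "jumps; final state:", path[-1])
```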

Rafael Izbicki

1 talk
  • 2025-09-04: Simulation-Based Calibration of Confidence Sets for Statistical Models

Jixiang Qing

1 talk
  • 2025-08-07: Bayesian Optimization Over Graphs With Shortest-Path Encodings

Maciej Buze

1 talk
  • 2025-07-17: Barycenters in Unbalanced Optimal Transport

Takuo Matsubara

1 talk
  • 2025-06-19: Wasserstein Gradient Boosting: A Framework for Distribution-Valued Supervised Learning

Liam Llamazares Elias

1 talk
  • 2025-05-07: A Parameterization of Anisotropic Gaussian Fields With Penalized Complexity Priors

Richard Everitt

1 talk
  • 2025-03-06: ABC-SMC^2 and Ensemble Kalman Inversion ABC

Adrien Corenflos

1 talk
  • 2025-02-06: High-Dimensional Inference in State-Space Models via an Auxiliary Variable Trick

Tim Rogers

1 talk
  • 2025-01-30: Learning About Dynamical Systems with Gaussian Processes

Maximillian Steffen

1 talk
  • 2024-11-21: Statistical guarantees for stochastic Metropolis-Hastings

Claire Gormley

1 talk
  • 2024-06-20: Bayesian nonparametric modelling of network data

Saifuddin Syed

1 talk
  • 2024-06-13: Scaling inference of MCMC algorithms with parallel computing

Gabriel Wallin

1 talk
  • 2024-05-09: Rotation to Sparse Loadings using Lp Losses and Related Inference Problems

François-Xavier Briol

1 talk
  • 2024-04-11: Robust and conjugate Gaussian process regression

Leandro Marcolino

1 talk
  • 2024-03-21: Identifying Adversaries in Ad-hoc Domains Using Q-valued Bayesian Estimations

Theo Papamarkou

1 talk
  • 2024-03-14: Aspects of sampling-based inference for Bayesian neural networks

Francesco Barile

1 talk
  • 2024-02-22: Flexible modeling of heterogeneous populations of networks: a Bayesian nonparametric approach

Kamélia Daudel

1 talk
  • 2024-02-15: Alpha-divergence Variational Inference Meets Importance Weighted Auto-Encoders: Methodology and Asymptotics

Estevão Prado

2 talks
  • 2024-02-08: Accounting for shared covariates in semi-parametric Bayesian additive regression trees
  • 2023-10-19: Metropolis-Hastings with fast, flexible sub-sampling

Chris Jewell

1 talk
  • 2023-12-14: Data Augmentation MCMC on epidemic models

Marina Riabiz

1 talk
  • 2023-12-07: Optimal Thinning of MCMC Output

Andy Wang

1 talk
  • 2023-11-23: Comparison theorems for Hybrid Slice Sampling

Aretha Teckentrup

1 talk
  • 2023-11-02: Gaussian processes, inverse problems and Markov chain Monte Carlo

Sam Holdstock

1 talk
  • 2023-10-26: Improved inference for stochastic kinetic models with small observation error via partial Rao-Blackwellisation

Alberto Cabezas

1 talk
  • 2023-10-12: Composable Inference in BlackJAX

Alice Corbella

1 talk
  • 2023-06-15: The Lifebelt Particle Filter for robust estimation from low-valued count data

Francesca Panero

1 talk
  • 2023-06-08: Modelling sparse networks with Bayesian nonparametrics

Francesca Crucinio

1 talk
  • 2023-05-11: Divide-and-Conquer SMC with applications to high dimensional filtering

Victor Elvira

1 talk
  • 2023-02-23: State-Space Models as Graphs

Sam Livingstone

2 talks
  • 2023-02-16: Pre-conditioning in Markov chain Monte Carlo
  • 2019-09-20: On the robustness of gradient-based MCMC algorithms.

    We analyse the tension between robustness and efficiency for Markov chain Monte Carlo (MCMC) sampling algorithms. In particular, we focus on robustness of MCMC algorithms with respect to heterogeneity in the target and their sensitivity to tuning, an issue of great practical relevance but still understudied theoretically. We show that the spectral gap of the Markov chains induced by classical gradient-based MCMC schemes (e.g. Langevin and Hamiltonian Monte Carlo) decays exponentially fast in the degree of mismatch between the scales of the proposal and target distributions, while for the random walk Metropolis (RWM) the decay is linear.
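
    The contrast described above can be probed with a toy experiment (an illustration of the phenomenon, not the paper's analysis): run random walk Metropolis and MALA on a standard normal target with an increasingly mismatched proposal scale and compare acceptance rates. The target, scales, and chain length below are arbitrary choices.

```python
# Toy illustration (not the paper's analysis): acceptance rates of RWM and MALA on a
# standard normal target as the proposal scale becomes increasingly mismatched.
import numpy as np

def log_pi(x):
    return -0.5 * x ** 2

def grad_log_pi(x):
    return -x

def acceptance_rate(step, use_gradient, n=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x, accepted = 0.0, 0
    for _ in range(n):
        if use_gradient:   # MALA proposal with its asymmetry correction
            mean_fwd = x + 0.5 * step ** 2 * grad_log_pi(x)
            y = mean_fwd + step * rng.normal()
            mean_bwd = y + 0.5 * step ** 2 * grad_log_pi(y)
            log_q_fwd = -0.5 * ((y - mean_fwd) / step) ** 2
            log_q_bwd = -0.5 * ((x - mean_bwd) / step) ** 2
            log_alpha = log_pi(y) - log_pi(x) + log_q_bwd - log_q_fwd
        else:              # random walk Metropolis (symmetric proposal, no correction needed)
            y = x + step * rng.normal()
            log_alpha = log_pi(y) - log_pi(x)
        if np.log(rng.uniform()) < log_alpha:
            x, accepted = y, accepted + 1
    return accepted / n

for step in (1.0, 3.0, 10.0):      # proposal scale vs a unit-scale target
    print(step, acceptance_rate(step, False), acceptance_rate(step, True))
```

    As the step size grows, MALA's acceptance rate collapses far more quickly than RWM's, which is the qualitative behaviour that the spectral-gap result above formalises.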

Estevao Batista Do Prado

1 talk
  • 2023-01-26: Bayesian additive regression trees (BART)

Alberto Cabezas Gonzalez

2 talks
  • 2023-01-19: Stereographic Markov Chain Monte Carlo
  • 2022-03-24: Elliptical slice sampling

Alexander Terenin

1 talk
  • 2023-01-12: Pathwise Conditioning and Non-Euclidean Gaussian Processes

Yu Luo

1 talk
  • 2022-12-08: Bayesian estimation using loss functions

Sam Power

2 talks
  • 2022-11-24: Explicit convergence bounds for Metropolis Markov chains: isoperimetry, spectral gaps and profiles
  • 2021-11-25: Double Control Variates for Gradient Estimation in Discrete Latent Variable Models

Mauro Camara Escudero

1 talk
  • 2022-11-17: Approximate Manifold Sampling

Alexandros Beskos

1 talk
  • 2022-11-03: Manifold Markov chain Monte Carlo methods for Bayesian inference in diffusion models

Jure Vogrinc

1 talk
  • 2022-10-27: The Barker proposal: Combining robustness and efficiency in gradient-based MCMC

Michael Whitehouse

1 talk
  • 2022-10-06: Consistent and fast inference in compartmental models of epidemics using PAL

Steffen Grünewälder

1 talk
  • 2022-05-05: Compressed Empirical Measures (in finite dimensions)

Louis Sharrock

1 talk
  • 2022-03-31: Parameter Estimation for the McKean-Vlasov Stochastic Differential Equation

Lionel Riou-Durand

1 talk
  • 2022-02-17: Metropolis Adjusted Underdamped Langevin Trajectories: a robust alternative to Hamiltonian Monte-Carlo

Szymon Urbas

1 talk
  • 2022-01-13: The Apogee to Apogee Path Sampler

Gael Martin

1 talk
  • 2021-06-22: landmark papers: Bayesian computation from 1763 to the 21st Century

Phyllis Ju

1 talk
  • 2021-06-10: Sequential Monte Carlo algorithms for agent-based models of disease transmission

Christian P. Robert

1 talk
  • 2021-05-27: landmark papers: Harold Jeffreys’s Theory of Probability Revisited

Clement Lee

2 talks
  • 2021-04-29: landmark papers: The Gelman-Rubin statistic: old and new
  • 2019-06-13: Clustering approach and MCMC practicalities of stochastic block models.

    The stochastic block model (SBM) is a popular choice for clustering the nodes of a network. In this talk, a few versions of the SBM will be reviewed, with a focus on the clustering approach (hard vs soft) and its relation to the subsequent MCMC algorithm. Model selection and some practical issues will also be discussed.

George Bolt

1 talk
  • 2021-04-15: MCMC Sampling and Posterior Inference for a New Metric-Based Network Model

Jeremie Coullon

1 talk
  • 2021-03-25: landmark papers: the Metropolis sampler (1953)

Liam Hodgkinson

1 talk
  • 2020-11-19: Stein kernels

Leah South

2 talks
  • 2020-02-13: The kernel Stein discrepancy
  • 2019-08-01: Variance reduction in MCMC.

Francois-Xavier Briol

1 talk
  • 2019-11-24: Statistical Inference for Generative Models with Maximum Mean Discrepancy

    While likelihood-based inference and its variants provide a statistically efficient and widely applicable approach to parametric inference, their application to models involving intractable likelihoods poses challenges. In this work, we study a class of minimum distance estimators for intractable generative models, that is, statistical models for which the likelihood is intractable but simulation is cheap. The distance considered, the maximum mean discrepancy (MMD), is defined through the embedding of probability measures into a reproducing kernel Hilbert space.
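
    To make the MMD concrete, the sketch below computes the standard unbiased estimate of the squared MMD with a Gaussian kernel and uses it in a toy minimum-distance fit of a location parameter by grid search. The kernel bandwidth, simulator, and grid are illustrative choices rather than the estimators studied in the talk.

```python
# Unbiased estimate of the squared MMD with a Gaussian kernel, used in a toy
# minimum-MMD fit of a location parameter (illustrative, not the talk's estimator).
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / bandwidth ** 2)

def mmd2_unbiased(x, y, bandwidth=1.0):
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    m, n = len(x), len(y)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))   # average over i != j
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(3)
observed = rng.normal(loc=2.0, size=500)           # "data" with unknown location

def simulate(theta, n=500):                        # cheap simulator standing in for an intractable model
    return theta + rng.normal(size=n)

grid = np.linspace(0.0, 4.0, 41)
losses = [mmd2_unbiased(observed, simulate(theta)) for theta in grid]
print("minimum-MMD estimate of the location:", grid[int(np.argmin(losses))])
```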

Jeremias Knoblauch

1 talk
  • 2019-11-07: Generalized variational inference
    Slides

    In this talk, I introduce a generalized representation of Bayesian inference. It is derived axiomatically, recovering existing Bayesian methods as special cases. It is then used to prove that variational inference (VI) based on the Kullback-Leibler Divergence with a variational family Q produces the optimal Q-constrained approximation to the exact Bayesian inference problem. Surprisingly, this implies that standard VI dominates any other Q-constrained approximation to the exact Bayesian inference problem. This means that alternative Q-constrained approximations such as VI minimizing other divergences and Expectation Propagation can produce better posteriors than VI only by implicitly targeting more appropriate Bayesian inference problems.
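
    To fix ideas on what a "Q-constrained approximation" is, the sketch below minimises the standard KL-based VI objective E_q[log q(theta) - log p(theta, x)] over a Gaussian family Q for a one-dimensional conjugate model, where the exact posterior is available for comparison. The model, quadrature grid, and brute-force optimiser are illustrative choices; this is standard VI, not the generalized representation introduced in the talk.

```python
# Standard KL-based variational inference on a 1-D conjugate toy model, minimised by
# brute force over a Gaussian family Q (illustrative sketch of the VI objective only).
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=1.5, scale=1.0, size=20)      # model: theta ~ N(0, 1), x_i | theta ~ N(theta, 1)

def log_joint(theta):                               # log p(theta, x) on a grid of theta values
    log_prior = -0.5 * theta ** 2
    log_lik = -0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0)
    return log_prior + log_lik

def kl_objective(mu, sigma, n_grid=400):
    # E_q[log q(theta) - log p(theta, x)], evaluated by quadrature on a grid covering q
    theta = np.linspace(mu - 8.0 * sigma, mu + 8.0 * sigma, n_grid)
    q = np.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    integrand = q * (np.log(q + 1e-300) - log_joint(theta))
    return integrand.sum() * (theta[1] - theta[0])

mus = np.linspace(0.0, 3.0, 61)
sigmas = np.linspace(0.05, 1.0, 60)
values = np.array([[kl_objective(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(values.argmin(), values.shape)

# exact conjugate posterior for comparison: N(sum(x) / (n + 1), 1 / (n + 1))
n = len(data)
print("VI optimum (mean, sd):", mus[i], sigmas[j])
print("exact posterior      :", data.sum() / (n + 1), np.sqrt(1.0 / (n + 1)))
```

    Because Q here contains the exact (Gaussian) posterior, the optimum recovers it; the talk's point concerns what is optimal when Q does not.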

Magnus Rattray

1 talk
  • 2019-10-09: Using Gaussian processes to infer pseudotime and branching from single-cell data.

    I will describe some applications of Gaussian process models to single-cell data. We have developed a scalable implementation of the Gaussian process latent variable model (GPLVM) that can be used for pseudotime estimation when there is prior knowledge about pseudotime, e.g. from capture times available in single-cell time course data [1]. Other dimensions of the GPLVM latent space can then be used to model additional sources of variation, e.g. from branching of cells into different lineages.

Nick Tawn

1 talk
  • 2019-05-09: The Annealed Leap Point Sampler (ALPS) for multimodal target distributions.

    Sampling from multimodal target distributions is a classical and challenging problem. Markov chain Monte Carlo methods typically rely on localised or gradient-based proposal mechanisms, so for multimodal target distributions the chain can become trapped in a local mode, resulting in biased sample output. This talk introduces a novel algorithm, ALPS, designed to provide a scalable approach to sampling from multimodal target distributions. The ALPS algorithm combines a number of the strengths of the current gold-standard approaches to multimodality.

Callum Vyner

1 talk
  • 2019-03-28: An Introduction to Divide-and-Conquer MCMC.

Matthew Ludkin

1 talk
  • 2019-02-28: Hug ‘N’ Hop: Explicit, non-reversible, contour-hugging MCMC.

Arnaud Doucet

1 talk
  • 2018-12-13: On discrete-time piecewise-deterministic MCMC schemes

Louis Aslett

1 talk
  • 2018-12-05: Privacy and Security in Bayesian Inference

Luke Kelly

1 talk
  • 2017-11-09: Lateral trait transfer in phylogenetic inference

    We are interested in inferring the phylogeny, or shared ancestry, of a set of species descended from a common ancestor. When traits pass vertically through ancestral relationships, the phylogeny is a tree and one can often compute the likelihood efficiently through recursions. Lateral transfer, whereby evolving species exchange traits outside of ancestral relationships, is a frequent source of model misspecification in phylogenetic inference. We propose a novel model of species diversification which explicitly controls for the effect of lateral transfer.

Yee Whye Teh

1 talk
  • 2017-11-01: On Bayesian Deep Learning and Deep Bayesian Learning

    Probabilistic and Bayesian reasoning is one of the principal theoretical pillars of our understanding of machine learning. Over the last two decades, it has inspired a whole range of successful machine learning methods and influenced the thinking of many researchers in the community. On the other hand, in the last few years the rise of deep learning has completely transformed the field and led to a string of phenomenal, era-defining successes.

Jack Baker

2 talks
  • 2016-12-06: An overview of Bayesian non-parametrics
  • 2015-12-10: STAN

Adam Johansen

1 talk
  • 2016-07-02: The iterated auxiliary particle filter

Joris Bierkens

1 talk
  • 2016-05-09: Super-efficient sampling using Zig Zag Monte Carlo

Peter Neal

1 talk
  • 2016-03-17: Optimal scaling of the independence sampler

Borja de Balle Pigem

1 talk
  • 2016-02-18: Differentially Private Policy Evaluation

James Hensman

1 talk
  • 2015-10-15: Variational inference in Gaussian process models

Alexandre Thiery

1 talk
  • 2015-05-19: Asymptotic Analysis of Random-Walk Metropolis on Ridged Densities

Vasileios Maroulas

1 talk
  • 2014-02-25: Filtering, drift homotopy and target tracking

Anthony Lee

1 talk
  • 2013-04-17: Particle Markov chain Monte Carlo and marginal likelihood estimation: strategies for improvement.

Joe Mellor

1 talk
  • 2013-02-20: Thompson Sampling in Switching Environments with Bayesian Online Change Point Detection

Nicos Pavlidis

1 talk
  • 2012-06-21: Classification in Dynamic Streaming Environments

Haeran Cho

1 talk
  • 2011-11-16: High-dimensional variable selection via tilting

Neil Drummond

1 talk
  • 2011-02-15: Quantum Monte Carlo

Rebecca Killick

1 talk
  • 2011-01-18: Optimal detection of changepoints with a linear computational cost

Krzysztof Latuszynski

1 talk
  • 2010-11-10: Exact Inference for a Markov switching diffusion model with discretely observed data

Anastasia Lykou

1 talk
  • 2010-11-03: Bayesian variable selection using Lasso

Giorgos Sermaidis

1 talk
  • 2009-11-03: Likelihood based inference for discretely observed diffusions

Ben Taylor

1 talk
  • 2008-06-02: Adaptive Sequential Monte Carlo Methods For Static Inference in Bayesian Mixture Analysis

Joe Whittaker

1 talk
  • 2008-05-13: The linear least squares prediction view of conjugate gradients

Hongsheng Dai

1 talk
  • 2008-03-18: Perfect sampling for Random Trees

Paul Smith

1 talk
  • 2008-02-05: Bayesian Analysis of ARMA and Transfer Function Time Series Models

Thomas Jaki

1 talk
  • 2007-11-21: Asymptotic simultaneous bootstrap confidence bounds for simple linear regression lines