| 19 Feb 2026 |
Chris Sherlock
Lancaster University
|
Robust, partially alive particle Metropolis-Hastings via the Frankenfilter
|
|
|
When a hidden Markov model permits the conditional likelihood of an observation given the hidden process to be zero, all particle simulations from one observation time to the next could produce zeros. If so, the filtering distribution cannot be estimated and the estimated parameter likelihood is zero. The alive particle filter addresses this by simulating a random number of particles for each inter-observation interval, stopping once a target number of non-zero conditional likelihoods has been obtained. For outlying observations or poor parameter values, a non-zero result can be extremely unlikely, and the computational cost prohibitive. We introduce the Frankenfilter, a principled, partially alive particle filter that targets a user-defined amount of success whilst fixing lower and upper bounds on the number of simulations. The Frankenfilter produces unbiased estimators of the likelihood, making it suitable for pseudo-marginal Metropolis-Hastings (PMMH). We demonstrate that PMMH with the Frankenfilter is more robust to outliers and to mis-specified initial parameter values than PMMH using standard particle filters, and is typically at least 2-3 times more efficient. We also provide advice on choosing the amount of success. In the case of n exact observations, this is particularly simple: target n successes.
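The stopping rule behind alive filters can be illustrated with a toy calculation (a sketch of the underlying identity only, not the Frankenfilter itself; all names here are illustrative): if Bernoulli(p) trials are simulated until a target number of successes is reached, then (target - 1)/(N - 1), with N the total number of trials, is an unbiased estimator of p. Identities of this kind are what allow a random stopping time to still deliver an unbiased likelihood estimator.

```python
import random

def trials_until(target, p, rng):
    """Count Bernoulli(p) draws needed to reach `target` successes."""
    successes = n = 0
    while successes < target:
        n += 1
        if rng.random() < p:
            successes += 1
    return n

rng = random.Random(1)
p, target = 0.3, 5
# (target - 1) / (N - 1) is unbiased for p under this stopping rule.
ests = [(target - 1) / (trials_until(target, p, rng) - 1) for _ in range(100_000)]
mean = sum(ests) / len(ests)
```

Averaging the estimator over many replications recovers p to Monte Carlo accuracy, even though the number of trials per replication is random.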
|
| 12 Feb 2026 |
|
Informal session
|
|
| 5 Feb 2026 |
Andre Menezes
Maynooth University
|
Bayesian nonparametric models for zero-inflated count-compositional data using ensembles of regression trees
|
|
|
Count-compositional data arise in many different fields, including high-throughput microbiome sequencing and palynology experiments, where a common, important goal is to understand how covariates relate to the observed compositions. Existing methods often fail to simultaneously address key challenges inherent in such data, namely: overdispersion, an excess of zeros, cross-sample heterogeneity, and nonlinear covariate effects. In this talk, we first present novel probabilistic portrayals of two multivariate models designed to handle zero-inflation in count-compositional data. Then, to address the above concerns, we propose novel Bayesian nonparametric models based on ensembles of regression trees. Specifically, we leverage the recently introduced zero-and-N-inflated multinomial distribution and assign independent nonparametric Bayesian additive regression tree (BART) priors to both the compositional and structural zero probability components of our model, to flexibly capture covariate effects. We further extend this by adding latent random effects to capture overdispersion and more general dependence structures among the categories. We develop an efficient inferential algorithm combining recent data augmentation schemes with established BART sampling routines. We evaluate our proposed models in simulation studies and illustrate their applicability with two case studies in microbiome and palaeoclimate modelling.
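As a concrete picture of the data type (an illustrative generator only; the zero-and-N-inflated multinomial used in the talk differs in its details, and all names here are hypothetical), one can draw zero-inflated count-compositional vectors by structurally zeroing each category with some probability and allocating the counts among the survivors:

```python
import random

def zero_inflated_multinomial(n, probs, zero_probs, rng):
    """Zero each category with probability zero_probs[j], then allocate the
    n counts multinomially among the categories that remain active."""
    active = [p if rng.random() >= z else 0.0 for p, z in zip(probs, zero_probs)]
    total = sum(active)
    counts = [0] * len(probs)
    if total == 0.0:                      # every category structurally zeroed
        return counts
    for _ in range(n):
        u, acc, idx = rng.random() * total, 0.0, 0
        for j, pj in enumerate(active):
            if pj > 0.0:
                acc += pj
                idx = j
                if u <= acc:              # inverse-CDF draw among active cells
                    break
        counts[idx] += 1
    return counts
```

Zero-inflation makes a category's count exactly zero with positive probability even for large n, which a plain multinomial cannot do; the models in the talk place BART priors on both the compositional and the structural-zero probabilities.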
|
| 29 Jan 2026 |
William Laplante
University College London
|
Conjugate Generalised Bayesian Inference for Discrete Doubly Intractable Problems
|
|
|
Doubly intractable problems occur when both the likelihood and the posterior are available only in unnormalised form, with computationally intractable normalisation constants. Bayesian inference then typically requires direct approximation of the posterior through specialised and often expensive MCMC methods. In this paper, we provide a computationally efficient alternative in the form of a novel generalised Bayesian posterior that allows for conjugate inference within the class of exponential family models for discrete data. We derive theoretical guarantees to characterise the asymptotic behaviour of the generalised posterior, supporting its use for inference. The method is evaluated on a range of challenging intractable exponential family models, including the Conway-Maxwell-Poisson graphical model of multivariate count data, autoregressive discrete time series models, and Markov random fields such as the Ising and Potts models. The computational gains are significant; in our experiments, the method is between 10 and 6000 times faster than state-of-the-art Bayesian computational methods.
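The source of the intractability is easy to exhibit for the Ising model mentioned in the abstract (a minimal sketch: the four-spin cycle and the value of theta are illustrative, not from the talk). The normalising constant is a sum over all spin configurations, which is only enumerable when the number of spins d is tiny:

```python
import math
from itertools import product

def unnorm(state, theta, edges):
    """Unnormalised Ising probability: exp(theta * sum of neighbour agreements)."""
    return math.exp(theta * sum(state[i] * state[j] for i, j in edges))

# Four spins arranged on the cycle 0-1-3-2-0: small enough to enumerate.
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
theta = 0.5
Z = sum(unnorm(s, theta, edges) for s in product([-1, 1], repeat=4))
# The sum above has 2**d terms for d spins; for d in the hundreds it is
# infeasible, so both the likelihood and (hence) the posterior are known
# only up to an intractable constant: the "doubly intractable" setting.
```

For this one-dimensional ring the transfer-matrix identity Z = (2 cosh theta)^4 + (2 sinh theta)^4 gives an exact check, but no such closed form exists for general graphs.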
|
| 22 Jan 2026 |
Yuga Iguchi
Lancaster University
|
Dynamical regimes of denoising diffusion models for sampling from multimodal distributions
|
|
|
I will discuss the mechanism of denoising diffusion models (DDMs) for sampling from multimodal distributions on $\mathbb{R}^d$. The first part of the talk will review the basics of DDMs, from discrete Markov chains to continuous-time formulations via SDEs. Then, using a mixture of two Gaussians as a canonical example of a multimodal target, I will describe how DDMs gradually transform the initial prior (a standard Gaussian) into the bimodal target distribution. In particular, I will show analytically that denoising trajectories dynamically change their behaviour during sampling, and that the denoising procedure can be characterised roughly by three stages: (1) an early stage of contraction; (2) an intermediate stage of expansion, in which contraction is lost; and (3) a final stage of local attraction to a single mode, possibly contracting again locally. I will also clarify how these stages depend on properties of the target distribution, such as the dimension, the separation between modes, and the variances of the mixture components. This talk is based on ongoing joint work with Paul Fearnhead.
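This setting can be sketched in one dimension (an assumption-laden sketch, not the talk's analysis: it assumes a variance-preserving diffusion with constant noise schedule and uses the probability-flow ODE with the mixture's exact score, whereas the talk may work with the SDE form). Starting from a standard Gaussian, Euler integration backwards in time splits the samples into the two modes:

```python
import math
import random

def score(x, t, modes, v0):
    """Exact score of the VP-diffused two-Gaussian mixture at time t (beta = 1)."""
    a = math.exp(-t / 2)              # signal scale e^{-t/2}
    s2 = a * a * v0 + 1 - a * a       # per-component marginal variance
    exps = [-(x - a * m) ** 2 / (2 * s2) for m in modes]
    mx = max(exps)                    # subtract max for numerical stability
    ws = [math.exp(e - mx) for e in exps]
    return sum(w * (a * m - x) / s2 for w, m in zip(ws, modes)) / sum(ws)

def sample(n, modes=(-3.0, 3.0), v0=0.0625, T=8.0, steps=800, seed=0):
    """Euler integration of the probability-flow ODE from t = T down to t = 0."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]  # marginal at t = T is ~N(0, 1)
    dt = T / steps
    for k in range(steps):
        t = T - k * dt
        # reverse-time ODE dx/dt = -x/2 - score/2, stepped from t to t - dt
        xs = [x + dt * (x / 2 + score(x, t, modes, v0) / 2) for x in xs]
    return xs
```

Tracking the trajectories against t shows the qualitative picture from the abstract: little movement while the two noised components still overlap, then a splitting of the cloud, then local attraction of each trajectory to one mode.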
|
| 15 Jan 2026 |
Hefin Lambley
University of Warwick
|
Autoencoders in function space
|
Slides
|
|
We propose function-space versions of autoencoders (machine-learning methods for dimension reduction and generative modelling) in both their deterministic (FAE) and variational (FVAE) forms. Formulating autoencoder objectives in function space enables training and evaluation with data discretised at arbitrary resolutions, leading to new applications such as inpainting, superresolution, and generative modelling. We discuss the technical challenges of formulating autoencoders in infinite dimensions. A key issue is that FVAE's variational inference is often ill defined, unlike in finite dimensions, limiting its applicability. We then explore specific problem classes where FVAE remains useful. We contrast this with the FAE objective, which remains well defined in many situations where FVAE fails, making it a robust and versatile alternative. We demonstrate both methods on scientific data sets, including Navier-Stokes fluid flow simulations. This is joint work with Justin Bunker and Mark Girolami (Cambridge), Andrew M. Stuart (Caltech) and T. J. Sullivan (Warwick).
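The resolution-invariance at stake can be caricatured by a linear, training-free sketch (entirely illustrative; FAE and FVAE are learned, nonlinear models, and all names here are hypothetical): encode a function on [0, 1] by its leading Fourier coefficients, computable from samples at any resolution, then decode onto a grid of a different size.

```python
import math

def encode(samples, K):
    """Midpoint-rule projection of grid samples of f on [0, 1] onto the first
    K Fourier modes; the quadrature adapts to whatever grid is supplied."""
    n = len(samples)
    coeffs = []
    for k in range(K):
        c = sum(f * math.cos(2 * math.pi * k * (i + 0.5) / n)
                for i, f in enumerate(samples)) / n
        s = sum(f * math.sin(2 * math.pi * k * (i + 0.5) / n)
                for i, f in enumerate(samples)) / n
        coeffs.append((c, s))
    return coeffs

def decode(coeffs, m):
    """Evaluate the truncated Fourier series on a fresh grid of m midpoints."""
    out = []
    for i in range(m):
        x = (i + 0.5) / m
        val = coeffs[0][0]  # k = 0: constant term (its sine part vanishes)
        for k in range(1, len(coeffs)):
            c, s = coeffs[k]
            val += 2 * c * math.cos(2 * math.pi * k * x) \
                 + 2 * s * math.sin(2 * math.pi * k * x)
        out.append(val)
    return out
```

Because the latent code lives in function space (Fourier coefficients rather than pixel values), the same code can be decoded at a finer resolution than it was encoded from, which is the mechanism behind superresolution-style applications.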
|