| 11 Dec 2025 |
Zheyang Shen
Newcastle University
|
A Computable Measure of Suboptimality for Entropy-Regularised Variational Objectives
|
Slides
|
|
Several emerging post-Bayesian methods target a probability distribution for which an entropy-regularised variational objective is minimised. This increased flexibility introduces a computational challenge, as one loses access to an explicit unnormalised density for the target. To mitigate this difficulty, we introduce a novel measure of suboptimality called 'gradient discrepancy', and in particular a 'kernel' gradient discrepancy (KGD) that can be explicitly computed. In the standard Bayesian context, KGD coincides with the kernel Stein discrepancy (KSD), and we obtain a novel characterisation of KSD as measuring the size of a variational gradient. Outside this familiar setting, KGD enables novel sampling algorithms to be developed and compared, even when unnormalised densities cannot be obtained. To illustrate this point, several novel algorithms are proposed and studied, including a natural generalisation of Stein variational gradient descent, with applications to mean-field neural networks and predictively oriented posteriors. On the theoretical side, our principal contribution is to establish sufficient conditions for desirable properties of KGD, such as continuity and convergence control.
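For readers unfamiliar with the KSD mentioned above, the following is a minimal sketch of how a kernel Stein discrepancy can be estimated from a sample and a score function alone (the RBF kernel, bandwidth, and Gaussian toy target are illustrative choices, not taken from the paper):

```python
import numpy as np

def ksd_rbf(x, score, bw=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy.

    x: (n, d) sample; score: function returning grad log p at each row.
    Only the score (gradient of the log *unnormalised* density) is
    needed, never the normalising constant.
    """
    n, d = x.shape
    s = score(x)                              # (n, d) score at each point
    diff = x[:, None, :] - x[None, :, :]      # (n, n, d) pairwise differences
    sq = (diff ** 2).sum(-1)                  # squared pairwise distances
    k = np.exp(-sq / (2 * bw ** 2))           # RBF kernel matrix
    gkx = -diff / bw ** 2 * k[..., None]      # grad_x k(x, y)
    gky = diff / bw ** 2 * k[..., None]       # grad_y k(x, y)
    tr = (d / bw ** 2 - sq / bw ** 4) * k     # trace of grad_x grad_y k
    up = (k * (s @ s.T)                       # k(x,y) s(x).s(y)
          + (gky * s[:, None, :]).sum(-1)     # s(x).grad_y k
          + (gkx * s[None, :, :]).sum(-1)     # s(y).grad_x k
          + tr)
    return up.mean()
```

Larger values indicate a sample less compatible with the target; for a standard normal target the score is simply `-x`.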
|
| 4 Dec 2025 |
Giorgos Vasdekis
|
Sampling with time-changed Markov processes
|
Slides
|
|
We introduce a framework of time-changed Markov processes to speed up the convergence of Markov chain Monte Carlo (MCMC) algorithms in the context of multimodal distributions and rare event simulation. The time-changed process is defined by adjusting the speed of time of a base process via a user-chosen, state-dependent function. We apply this framework to several Markov processes from the MCMC literature, such as Langevin diffusions and piecewise deterministic Markov processes, obtaining novel modifications of classical algorithms and also re-discovering known MCMC algorithms. We prove theoretical properties of the time-changed process under suitable conditions on the base process, focusing on connecting the stationary distributions and qualitative convergence properties such as geometric and uniform ergodicity, as well as a functional central limit theorem. Time permitting, we will compare our approach with the framework of space transformations, clarifying the similarities between the approaches. This is joint work with Andrea Bertazzi.
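As an illustrative sketch of the idea (not the talk's exact construction), one can time-change an overdamped Langevin diffusion by a user-chosen, state-dependent speed function s; assuming the base diffusion targets pi(x)s(x) up to normalisation, the time-changed process is stationary for pi, and an Euler–Maruyama discretisation reads:

```python
import numpy as np

def time_changed_langevin(grad_log_target, speed, grad_log_speed,
                          x0, dt, n, rng):
    """Euler-Maruyama sketch of a time-changed overdamped Langevin
    diffusion in one dimension.

    Assuming the base diffusion targets pi(x) * s(x) (up to a constant),
    the process with generator s(x)L, simulated here as
        dY = s(Y) * grad log(pi * s)(Y) dt + sqrt(2 s(Y)) dW,
    is stationary for pi.  Illustrative only.
    """
    x, out = x0, np.empty(n)
    for i in range(n):
        s = speed(x)
        drift = s * (grad_log_target(x) + grad_log_speed(x))
        x = x + drift * dt + np.sqrt(2 * s * dt) * rng.normal()
        out[i] = x
    return out
```

With pi = N(0, 1) and speed s(x) = 1 + x^2, the process moves faster in the tails while still targeting the standard normal.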
|
| 20 Nov 2025 |
Lanya Yang
Lancaster University
|
Exchangeable Particle Gibbs for Markov Jump Processes
|
Slides
|
|
Inference in stochastic reaction-network models—such as the SEIR epidemic model or the Lotka–Volterra predator–prey system—is crucial for understanding the dynamics of interacting systems in epidemiology, ecology, and systems biology. These models are typically represented as Markov jump processes (MJPs) with intractable likelihoods. As a result, particle Markov chain Monte Carlo (particle MCMC) methods, particularly the Particle Gibbs (PG) sampler, have become standard tools for Bayesian inference. However, PG suffers from severe particle degeneracy, especially in high-dimensional state spaces, leading to poor mixing and inefficiency. In this talk, I focus on improving the efficiency of particle MCMC methods for inference in reaction networks by addressing the degeneracy problem. Building on recent work on the Exchangeable Particle Gibbs (xPG) sampler for continuous-state diffusions, this project develops a novel version of xPG tailored to discrete-state reaction networks, where randomness is driven by Poisson processes rather than Brownian motion. The proposed method retains the exchangeability framework of xPG while adapting it to the structural and computational challenges specific to reaction networks.
|
| 30 Oct 2025 |
Rui Zhang
Lancaster University
|
A Dynamic Perspective of Matérn Gaussian Processes
|
Slides
HTML Slides
|
|
Gaussian process (GP) models, ubiquitous in statistics and machine learning (Rasmussen and Williams, 2006), are static by default: under either the weight-space or the function-space view (Kanagawa et al., 2025), the observation and test locations have no unilateral dependency order, which also explains the cubic computational cost of GP regression. The dynamic view of Gaussian processes, available only for a class of GP models, instead reformulates the dependency structure unilaterally (Whittle, 1954), enabling sequential inference for GP regression at a computational cost that can scale linearly (Hartikainen and Särkkä, 2010; Särkkä and Hartikainen, 2012) with little to no approximation. This talk explores this dynamic perspective of (Matérn) Gaussian processes and some of its consequences.
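A minimal sketch of the dynamic view for the Matérn-1/2 (exponential) kernel, whose GP prior is equivalent to an Ornstein–Uhlenbeck state-space model: the GP marginal likelihood can then be computed in linear time by Kalman filtering. The hyperparameter names here are illustrative:

```python
import numpy as np

def ou_kalman_loglik(t, y, sigma2=1.0, ell=1.0, noise=0.1):
    """Log marginal likelihood of GP regression with a Matern-1/2
    (exponential) kernel sigma2 * exp(-|dt| / ell), computed in O(n)
    by Kalman filtering the equivalent Ornstein-Uhlenbeck model."""
    order = np.argsort(t)
    t, y = t[order], y[order]
    m, P = 0.0, sigma2            # stationary prior on the first state
    ll, prev = 0.0, None
    for ti, yi in zip(t, y):
        if prev is not None:
            a = np.exp(-(ti - prev) / ell)               # OU transition
            m, P = a * m, a * a * P + sigma2 * (1 - a * a)
        S = P + noise                  # predictive variance of y_i
        ll += -0.5 * (np.log(2 * np.pi * S) + (yi - m) ** 2 / S)
        K = P / S                      # Kalman gain
        m, P = m + K * (yi - m), (1 - K) * P
        prev = ti
    return ll
```

By the prediction-error decomposition, this matches the usual batch GP marginal likelihood computed from the full n-by-n kernel matrix, but without the cubic cost.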
|
| 16 Oct 2025 |
Henry Moss
Lancaster University
|
GPGreen: Linear Operator Learning with Gaussian Processes
|
|
| 4 Sep 2025 |
Rafael Izbicki
Federal University of São Carlos, Brazil
|
Simulation-Based Calibration of Confidence Sets for Statistical Models
|
|
| 7 Aug 2025 |
Jixiang Qing
Imperial College London
|
Bayesian Optimization Over Graphs With Shortest-Path Encodings
|
|
| 17 Jul 2025 |
Maciej Buze
|
Barycenters in Unbalanced Optimal Transport
|
|
| 3 Jul 2025 |
Henry Moss
|
Fusing Neural and Statistical Models
|
|
| 19 Jun 2025 |
Takuo Matsubara
University of Edinburgh
|
Wasserstein Gradient Boosting: A Framework for Distribution-Valued Supervised Learning
|
NeurIPS
|
| 12 Jun 2025 |
Augustin Chevallier
University of Strasbourg
|
Towards Practical PDMP Sampling: Metropolis Adjustments, Locally Adaptive Step-Sizes, and NUTS-Based Time Lengths
|
|
| 29 May 2025 |
Dennis Prangle
University of Bristol
|
Distilling Importance Sampling for Likelihood Free Inference
|
JCGS
|
| 15 May 2025 |
Yuga Iguchi
|
A Closed-Form Transition Density Expansion for Elliptic and Hypo-Elliptic SDEs
|
|
| 7 May 2025 |
Liam Llamazares Elias
|
A Parameterization of Anisotropic Gaussian Fields With Penalized Complexity Priors
|
|
| 20 Mar 2025 |
Chris Nemeth
|
Can ODEs Make Monte Carlo Methods Great Again?
|
|
| 6 Mar 2025 |
Richard Everitt
University of Warwick
|
ABC-SMC^2 and Ensemble Kalman Inversion ABC
|
|
| 20 Feb 2025 |
Paul Fearnhead
|
Optimised Annealed Sequential Monte Carlo Samplers
|
|
| 6 Feb 2025 |
Adrien Corenflos
University of Warwick
|
High-Dimensional Inference in State-Space Models via an Auxiliary Variable Trick
|
|
| 30 Jan 2025 |
Tim Rogers
University of Sheffield
|
Learning About Dynamical Systems with Gaussian Processes
|
|
| 10 Dec 2024 |
Lorenzo Rimella
University of Turin
|
Categorical Approximate Likelihood for individual-based models
|
|
| 28 Nov 2024 |
Connie Trojan
|
Diffusion Generative Modelling for Divide-and-Conquer MCMC
|
|
| 21 Nov 2024 |
Maximillian Steffen
Karlsruhe Institute of Technology
|
Statistical guarantees for stochastic Metropolis-Hastings
|
|
| 27 Jun 2024 |
Chris Sherlock
|
Tuning pseudo-marginal Metropolis-Hastings: a vase or two faces?
|
|
| 20 Jun 2024 |
Claire Gormley
University College Dublin
|
Bayesian nonparametric modelling of network data
|
|
| 13 Jun 2024 |
Saifuddin Syed
University of Oxford
|
Scaling inference of MCMC algorithms with parallel computing
|
JRSSB
|
| 6 Jun 2024 |
Rui Zhang
|
Unadjusted Barker as an SDE Numerical Scheme
|
|
| 16 May 2024 |
Wentao Li
|
Optimal combination of composite likelihoods using approximate Bayesian computation with application to state-space models
|
|
| 9 May 2024 |
Gabriel Wallin
|
Rotation to Sparse Loadings using Lp Losses and Related Inference Problems
|
|
| 11 Apr 2024 |
François-Xavier Briol
|
Robust and conjugate Gaussian process regression
|
|
| 21 Mar 2024 |
Leandro Marcolino
|
Identifying Adversaries in Ad-hoc Domains Using Q-valued Bayesian Estimations
|
AAMAS
|
| 14 Mar 2024 |
Theo Papamarkou
|
Aspects of sampling-based inference for Bayesian neural networks
|
|
| 7 Mar 2024 |
Tamas Papp
|
Simulating the independence sampler parallel-in-time
|
|
| 22 Feb 2024 |
Francesco Barile
|
Flexible modeling of heterogeneous populations of networks: a Bayesian nonparametric approach
|
|
| 15 Feb 2024 |
Kamélia Daudel
|
Alpha-divergence Variational Inference Meets Importance Weighted Auto-Encoders: Methodology and Asymptotics
|
JMLR
|
| 8 Feb 2024 |
Estevão Prado
|
Accounting for shared covariates in semi-parametric Bayesian additive regression trees
|
|
| 1 Feb 2024 |
Lorenzo Rimella
|
A State-Space Perspective on Modelling and Inference for Online Skill Rating
|
|
| 14 Dec 2023 |
Chris Jewell
|
Data Augmentation MCMC on epidemic models
|
|
| 7 Dec 2023 |
Marina Riabiz
|
Optimal Thinning of MCMC Output
|
JRSSB
|
| 30 Nov 2023 |
Lorenzo Rimella
|
Simulation Based Composite Likelihood
|
|
| 23 Nov 2023 |
Andy Wang
|
Comparison theorems for Hybrid Slice Sampling
|
|
| 16 Nov 2023 |
Chris Sherlock
|
Ensemble Kalman filter
|
|
| 9 Nov 2023 |
Chris Nemeth
|
Bayesian Flow Networks
|
|
| 2 Nov 2023 |
Aretha Teckentrup
|
Gaussian processes, inverse problems and Markov chain Monte Carlo
|
|
| 26 Oct 2023 |
Sam Holdstock
|
Improved inference for stochastic kinetic models with small observation error via partial Rao-Blackwellisation
|
|
| 19 Oct 2023 |
Estevão Prado
|
Metropolis-Hastings with fast, flexible sub-sampling
|
|
| 12 Oct 2023 |
Alberto Cabezas
|
Composable Inference in BlackJAX
|
|
| 29 Jun 2023 |
Tamas Papp
|
Introduction to diffusion generative models
|
|
| 22 Jun 2023 |
Chris Sherlock
|
Fast return-level estimates for flood insurance via an improved Bennett inequality for random variables with differing upper bounds
|
Ongoing work
|
| 15 Jun 2023 |
Alice Corbella
University of Warwick
|
The Lifebelt Particle Filter for robust estimation from low-valued count data
|
|
| 8 Jun 2023 |
Francesca Panero
London School of Economics
|
Modelling sparse networks with Bayesian nonparametrics
|
Ongoing work
|
| 18 May 2023 |
Lorenzo Rimella
|
Localised filtering algorithm: the BPF and the Graph Filter
|
Link
Link
|
| 11 May 2023 |
Francesca Crucinio
ENSAE
|
Divide-and-Conquer SMC with applications to high dimensional filtering
|
Statistica Sinica
|
| 27 Apr 2023 |
Paul Fearnhead
|
Automatic Differentiation of Programs with Discrete Randomness
|
NeurIPS
|
| 2 Mar 2023 |
Chris Sherlock
|
KSD for dummies
|
|
| 23 Feb 2023 |
Victor Elvira
University of Edinburgh
|
State-Space Models as Graphs
|
|
| 16 Feb 2023 |
Sam Livingstone
University College London
|
Pre-conditioning in Markov chain Monte Carlo
|
Ongoing work
|
| 26 Jan 2023 |
Estevão Batista do Prado
|
Bayesian additive regression trees (BART)
|
Link
Link
|
| 19 Jan 2023 |
Alberto Cabezas Gonzalez
|
Stereographic Markov Chain Monte Carlo
|
|
| 12 Jan 2023 |
Alexander Terenin
University of Cambridge
|
Pathwise Conditioning and Non-Euclidean Gaussian Processes
|
|
| 15 Dec 2022 |
Tamas Papp
|
Coupling MCMC algorithms in high dimensions
|
|
| 8 Dec 2022 |
Yu Luo
|
Bayesian estimation using loss functions
|
|
| 24 Nov 2022 |
Sam Power
University of Bristol
|
Explicit convergence bounds for Metropolis Markov chains: isoperimetry, spectral gaps and profiles
|
|
| 17 Nov 2022 |
Mauro Camara Escudero
University of Bristol
|
Approximate Manifold Sampling
|
|
| 3 Nov 2022 |
Alexandros Beskos
UCL
|
Manifold Markov chain Monte Carlo methods for Bayesian inference in diffusion models
|
|
| 27 Oct 2022 |
Jure Vogrinc
University of Warwick
|
The Barker proposal: Combining robustness and efficiency in gradient-based MCMC
|
|
| 20 Oct 2022 |
Paul Fearnhead
|
Martingale posterior distributions
|
|
| 6 Oct 2022 |
Michael Whitehouse
University of Bristol
|
Consistent and fast inference in compartmental models of epidemics using PAL
|
|
| 29 Sep 2022 |
Chris Sherlock
|
Comparison of Markov chains via weak Poincaré inequalities with application to pseudo-marginal MCMC
|
|
| 22 Sep 2022 |
Lorenzo Rimella
|
Inference in Stochastic Epidemic Models via Multinomial Approximations
|
|
| 15 Sep 2022 |
Chris Nemeth
|
Metropolis–Hastings via Classification
|
|
| 23 Jun 2022 |
Paul Fearnhead
|
Non-Reversible Parallel Tempering: a Scalable Highly Parallel MCMC Scheme
|
|
| 9 Jun 2022 |
Augustin Chevallier
|
Continuously-Tempered PDMP samplers
|
|
| 26 May 2022 |
Chris Sherlock
|
Scalable Importance Tempering and Bayesian Variable Selection
|
|
| 5 May 2022 |
Steffen Grünewälder
|
Compressed Empirical Measures (in finite dimensions)
|
|
| 31 Mar 2022 |
Louis Sharrock
University of Bristol
|
Parameter Estimation for the McKean-Vlasov Stochastic Differential Equation
|
|
| 24 Mar 2022 |
Alberto Cabezas Gonzalez
|
Elliptical slice sampling
|
|
| 17 Mar 2022 |
Augustin Chevallier
|
Slice sampling & PDMP
|
|
| 3 Mar 2022 |
Paul Fearnhead
|
Boost your favorite MCMC sampler using Kac’s theorem: the Kick-Kac teleportation algorithm - Part 2
|
|
| 24 Feb 2022 |
Paul Fearnhead
|
Boost your favorite MCMC sampler using Kac’s theorem: the Kick-Kac teleportation algorithm - Part 1
|
|
| 17 Feb 2022 |
Lionel Riou-Durand
University of Warwick
|
Metropolis Adjusted Underdamped Langevin Trajectories: a robust alternative to Hamiltonian Monte-Carlo
|
|
| 3 Feb 2022 |
Chris Sherlock
|
Statistical scalability and approximate inference in distributed computing environments
|
|
| 27 Jan 2022 |
Lorenzo Rimella
|
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
|
|
| 20 Jan 2022 |
Augustin Chevallier
|
Non-reversible guided Metropolis kernel
|
|
| 13 Jan 2022 |
Szymon Urbas
|
The Apogee to Apogee Path Sampler
|
|
| 9 Dec 2021 |
Chris Sherlock
|
Metropolis-Hastings with Averaged Acceptance Ratios
|
|
| 2 Dec 2021 |
Chris Nemeth
|
Waste-free sequential Monte Carlo
|
|
| 25 Nov 2021 |
Sam Power
University of Bristol
|
Double Control Variates for Gradient Estimation in Discrete Latent Variable Models
|
|
| 18 Nov 2021 |
Chris Nemeth
|
How do you tune MCMC algorithms?
|
|
| 4 Nov 2021 |
Augustin Chevallier
|
Approximations of Piecewise Deterministic Markov Processes and their convergence properties
|
|
| 28 Oct 2021 |
Paul Fearnhead
|
Multilevel Linear Models, Gibbs Samplers and Multigrid Decompositions
|
|
| 14 Oct 2021 |
Tamas Papp
|
Estimating Markov chain convergence with empirical Wasserstein distance bounds
|
|
| 22 Jun 2021 |
Gael Martin
Monash University
|
landmark papers: Bayesian computation from 1763 to the 21st Century
|
|
| 10 Jun 2021 |
Phyllis Ju
Harvard University
|
Sequential Monte Carlo algorithms for agent-based models of disease transmission
|
|
| 27 May 2021 |
Christian P. Robert
Université Paris-Dauphine
|
landmark papers: Harold Jeffreys’s Theory of Probability Revisited
|
|
| 13 May 2021 |
Lorenzo Rimella
|
Dynamic Bayesian Neural Networks
|
|
| 29 Apr 2021 |
Clement Lee
|
landmark papers: The Gelman-Rubin statistic: old and new
|
|
| 15 Apr 2021 |
George Bolt
|
MCMC Sampling and Posterior Inference for a New Metric-Based Network Model
|
|
| 25 Mar 2021 |
Jeremie Coullon
|
landmark papers: the Metropolis sampler (1953)
|
|
| 4 Mar 2021 |
Chris Sherlock
|
Differentiable Particle Filtering via Entropy-Regularized Optimal Transport
|
|
| 19 Nov 2020 |
Liam Hodgkinson
|
Stein kernels
|
|
| 3 Mar 2020 |
Chris Nemeth
|
Deep generative modelling: autoencoders, VAEs, GANs… and all that jazz! Part 2
|
|
| 27 Feb 2020 |
Chris Nemeth
|
Deep generative modelling: autoencoders, VAEs, GANs… and all that jazz!
|
|
| 13 Feb 2020 |
Leah South
|
The kernel Stein discrepancy
|
|
| 5 Dec 2019 |
Paul Fearnhead
|
Zig Zag Sampler
|
|
| 24 Nov 2019 |
François-Xavier Briol
|
Statistical Inference for Generative Models with Maximum Mean Discrepancy
|
|
|
While likelihood-based inference and its variants provide a statistically efficient and widely applicable approach to parametric inference, their application to models involving intractable likelihoods poses challenges. In this work, we study a class of minimum distance estimators for intractable generative models, that is, statistical models for which the likelihood is intractable but simulation is cheap. The distance considered, the maximum mean discrepancy (MMD), is defined through the embedding of probability measures into a reproducing kernel Hilbert space.
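A minimal sketch of the unbiased squared-MMD estimate that such minimum distance estimators repeatedly evaluate between observed and simulated data (the RBF kernel and bandwidth are illustrative choices):

```python
import numpy as np

def mmd2_unbiased(x, y, bw=1.0):
    """Unbiased (U-statistic) estimate of the squared maximum mean
    discrepancy between samples x and y under an RBF kernel."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    n, m = len(x), len(y)
    return ((kxx.sum() - np.trace(kxx)) / (n * (n - 1))   # drop diagonal
            + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
            - 2 * kxy.mean())
```

An MMD minimum-distance estimator would minimise this quantity over the simulator's parameters, with y drawn from the simulator; being unbiased, the estimate can be slightly negative when the two samples come from the same distribution.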
|
| 7 Nov 2019 |
Jeremias Knoblauch
|
Generalized variational inference
|
Slides
|
|
In this talk, I introduce a generalized representation of Bayesian inference. It is derived axiomatically, recovering existing Bayesian methods as special cases. It is then used to prove that variational inference (VI) based on the Kullback-Leibler Divergence with a variational family Q produces the optimal Q-constrained approximation to the exact Bayesian inference problem. Surprisingly, this implies that standard VI dominates any other Q-constrained approximation to the exact Bayesian inference problem. This means that alternative Q-constrained approximations such as VI minimizing other divergences and Expectation Propagation can produce better posteriors than VI only by implicitly targeting more appropriate Bayesian inference problems.
|
| 9 Oct 2019 |
Magnus Rattray
|
Using Gaussian processes to infer pseudotime and branching from single-cell data
|
|
|
I will describe some applications of Gaussian process models to single-cell data. We have developed a scalable implementation of the Gaussian process latent variable model (GPLVM) that can be used for pseudotime estimation when there is prior knowledge about pseudotime, e.g. from capture times available in single-cell time course data [1]. Other dimensions of the GPLVM latent space can then be used to model additional sources of variation, e.g. from branching of cells into different lineages.
|
| 20 Sep 2019 |
Sam Livingstone
|
On the robustness of gradient-based MCMC algorithms
|
|
|
We analyse the tension between robustness and efficiency for Markov chain Monte Carlo (MCMC) sampling algorithms. In particular, we focus on robustness of MCMC algorithms with respect to heterogeneity in the target and their sensitivity to tuning, an issue of great practical relevance but still understudied theoretically. We show that the spectral gap of the Markov chains induced by classical gradient-based MCMC schemes (e.g. Langevin and Hamiltonian Monte Carlo) decays exponentially fast in the degree of mismatch between the scales of the proposal and target distributions, while for the random walk Metropolis (RWM) the decay is linear.
|
| 1 Aug 2019 |
Leah South
|
Variance reduction in MCMC
|
|
| 13 Jun 2019 |
Clement Lee
|
Clustering approach and MCMC practicalities of stochastic block models
|
|
|
The stochastic block model (SBM) is a popular choice for clustering nodes in a network. In this talk, a few versions of the SBM will be reviewed, with a focus on the clustering approach (hard vs soft) and its relation to the subsequent MCMC algorithm. Model selection and some practical issues will also be discussed.
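A minimal sketch of the (hard-assignment) SBM as a generative model, assuming known block memberships z and block connectivity matrix B (all names illustrative):

```python
import numpy as np

def simulate_sbm(z, B, rng):
    """Draw an undirected adjacency matrix from a stochastic block model:
    edge (i, j) is Bernoulli with probability B[z[i], z[j]]."""
    n = len(z)
    P = B[np.ix_(z, z)]                    # per-pair edge probabilities
    U = rng.uniform(size=(n, n))
    A = np.triu((U < P).astype(int), k=1)  # upper triangle, no self-loops
    return A + A.T                         # symmetrise

rng = np.random.default_rng(0)
z = np.repeat([0, 1], 50)                  # two blocks of 50 nodes
B = np.array([[0.3, 0.02],                 # assortative connectivity
              [0.02, 0.3]])
A = simulate_sbm(z, B, rng)
```

Inference then reverses this generative story: given A, recover z (hard or soft) and B, typically by MCMC over the assignments.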
|
| 9 May 2019 |
Nick Tawn
|
The Annealed Leap Point Sampler (ALPS) for multimodal target distributions
|
|
|
Sampling from multimodal target distributions is a classical, challenging problem. Markov chain Monte Carlo methods typically rely on localised or gradient-based proposal mechanisms, so for target distributions exhibiting multimodality the chain becomes trapped in a local mode, resulting in biased sample output. This talk introduces a novel algorithm, ALPS, designed to provide a scalable approach to sampling from multimodal target distributions. The ALPS algorithm combines a number of the strengths of the current gold-standard approaches for multimodality.
|
| 28 Mar 2019 |
Callum Vyner
|
An Introduction to Divide-and-Conquer MCMC
|
|
| 28 Feb 2019 |
Matthew Ludkin
|
Hug ‘N’ Hop: Explicit, non-reversible, contour-hugging MCMC
|
|
| 14 Feb 2019 |
Henry Moss
|
An Intro to Information-Driven Bayesian Optimisation
|
|
| 13 Dec 2018 |
Arnaud Doucet
|
On discrete-time piecewise-deterministic MCMC schemes
|
|
| 5 Dec 2018 |
Louis Aslett
|
Privacy and Security in Bayesian Inference
|
|
| 15 Nov 2018 |
Chris Sherlock
|
The Minimal Extended Statespace Algorithm for exact inference on Markov jump processes
|
|
| 7 Dec 2017 |
Gareth Ridall
|
Sequential Bayesian estimation and model selection
|
|
|
Work done in collaboration with Tony Pettitt from QUT Brisbane. I would like to: introduce the Dirichlet form, which can be thought of as a generalisation of the expected squared jumping distance, and show that the spectral gap has a variational representation over Dirichlet forms; and introduce the asymptotic variance of a Markov chain, which is the theoretical equivalent of the practical measure 1/(effective sample size), and provide a variational representation of this.
|
| 29 Nov 2017 |
Chris Nemeth
|
Pseudo-extended MCMC
|
|
|
MCMC algorithms are a class of exact methods for sampling from target distributions. If the target is multimodal, MCMC algorithms often struggle to explore all of its modes within a reasonable number of iterations. This issue can become even more pronounced with efficient gradient-based samplers, such as HMC, which tend to become trapped in local modes. In this talk, I’ll outline how the pseudo-extended target, based on pseudo-marginal MCMC, can be used to improve the mixing of the HMC sampler by tempering the target distribution.
|
| 9 Nov 2017 |
Luke Kelly
|
Lateral trait transfer in phylogenetic inference
|
|
|
We are interested in inferring the phylogeny, or shared ancestry, of a set of species descended from a common ancestor. When traits pass vertically through ancestral relationships, the phylogeny is a tree and one can often compute the likelihood efficiently through recursions. Lateral transfer, whereby evolving species exchange traits outside of ancestral relationships, is a frequent source of model misspecification in phylogenetic inference. We propose a novel model of species diversification which explicitly controls for the effect of lateral transfer.
|
| 1 Nov 2017 |
Yee Whye Teh
|
On Bayesian Deep Learning and Deep Bayesian Learning
|
|
|
Probabilistic and Bayesian reasoning is one of the principal theoretical pillars of our understanding of machine learning. Over the last two decades, it has inspired a whole range of successful machine learning methods and influenced the thinking of many researchers in the community. On the other hand, in the last few years the rise of deep learning has completely transformed the field and led to a string of phenomenal, era-defining successes.
|
| 18 May 2017 |
Chris Sherlock
|
Asymptotic variance and geometric convergence of MCMC: variational representations
|
|
|
An MCMC algorithm is geometrically ergodic if it converges to the intended posterior geometrically in the number of iterations. A number of useful properties follow from geometric ergodicity, including that the practical efficiency measure of “effective sample size” is meaningful for any sensible function of interest. The standard method for proving geometric ergodicity for a particular algorithm involves a “drift condition” and a “small set”, and can be time consuming, both in the proof itself and in understanding why the drift condition and small set are helpful.
|
| 26 Jan 2017 |
Chris Sherlock
|
Delayed-acceptance MCMC with examples: advantages and pitfalls and how to avoid the latter
|
|
|
When conducting MCMC using the Metropolis-Hastings algorithm the posterior distribution must be evaluated at the proposed point at every iteration; in many situations, however, the posterior is computationally expensive to evaluate. When a computationally cheap approximation to the posterior is also available, the delayed acceptance algorithm (aka surrogate transition method) can be used to increase the efficiency of the MCMC whilst still targeting the correct posterior. In the first part of this talk I will explain and justify the algorithm itself and overview a number of examples of its (successful) application.
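A minimal sketch of the delayed-acceptance idea for a random-walk proposal, with an illustrative Gaussian target and a deliberately mis-scaled cheap surrogate (not an example from the talk):

```python
import numpy as np

def delayed_acceptance_rwm(logpi, logpi_hat, x0, n, step, rng):
    """Delayed-acceptance random-walk Metropolis: the cheap logpi_hat
    screens proposals (stage 1); the expensive logpi is evaluated only
    for proposals surviving stage 1 (stage 2).  The two-stage acceptance
    still leaves the exact target invariant."""
    x, out, evals = x0, [], 0
    lp, lph = logpi(x), logpi_hat(x)
    for _ in range(n):
        xp = x + step * rng.normal()
        lphp = logpi_hat(xp)
        # Stage 1: accept/reject using only the cheap surrogate.
        if np.log(rng.uniform()) < lphp - lph:
            lpp = logpi(xp)           # expensive evaluation
            evals += 1
            # Stage 2: correct for the surrogate's error.
            if np.log(rng.uniform()) < (lpp - lp) - (lphp - lph):
                x, lp, lph = xp, lpp, lphp
        out.append(x)
    return np.array(out), evals
```

The count of expensive evaluations is strictly below the number of iterations, which is the whole point: stage-1 rejections cost only a surrogate evaluation.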
|
| 6 Dec 2016 |
Jack Baker
|
An overview of Bayesian non-parametrics
|
|
| 11 Nov 2016 |
Wentao Li
|
Improved Convergence of Regression Adjusted Approximate Bayesian Computation
|
|
| 20 Oct 2016 |
Paul Fearnhead
|
The Scalable Langevin Exact Algorithm: Bayesian Inference for Big Data
|
|
| 2 Jul 2016 |
Adam Johansen
|
The iterated auxiliary particle filter
|
|
| 19 May 2016 |
Chris Sherlock
|
Pseudo-marginal MCMC using averages of unbiased estimators
|
|
| 9 May 2016 |
Joris Bierkens
University of Warwick
|
Super-efficient sampling using Zig Zag Monte Carlo
|
|
| 14 Apr 2016 |
Paul Fearnhead
|
Research opportunities with MCMC and Big Data
|
|
| 17 Mar 2016 |
Peter Neal
|
Optimal scaling of the independence sampler
|
|
| 25 Feb 2016 |
Paul Fearnhead
|
Continuous-Time Importance Sampling (and MCMC)
|
|
| 18 Feb 2016 |
Borja de Balle Pigem
|
Differentially Private Policy Evaluation
|
|
| 10 Dec 2015 |
Jack Baker
|
Stan
|
|
| 26 Nov 2015 |
Paul Fearnhead
|
Discussion of “The Bouncy Particle Sampler: A Non-Reversible Rejection-Free Markov Chain Monte Carlo Method”
|
Link
Link
|
| 15 Oct 2015 |
James Hensman
|
Variational inference in Gaussian process models
|
|
| 19 May 2015 |
Alexandre Thiery
National University of Singapore
|
Asymptotic Analysis of Random-Walk Metropolis on Ridged Densities
|
|
| 28 Apr 2015 |
Chris Sherlock
|
Delayed acceptance particle marginal random walk Metropolis algorithms and their optimisation
|
|
| 5 Mar 2015 |
Chris Nemeth
|
Bayesian Inference for Big Data: Current and Future Directions
|
|
| 18 Dec 2014 |
Wentao Li
|
Discussion of the RSS read paper: “Sequential Quasi Monte Carlo” by Mathieu Gerber and Nicolas Chopin.
|
|
| 28 Nov 2014 |
Chris Nemeth
|
Particle Metropolis adjusted Langevin algorithms
|
|
| 11 Mar 2014 |
Paul Fearnhead
|
Reparameterisations for Particle MCMC
|
|
| 25 Feb 2014 |
Vasileios Maroulas
University of Tennessee
|
Filtering, drift homotopy and target tracking
|
|
| 11 Dec 2013 |
Dennis Prangle
University of Bristol
|
Speeding ABC inference using early-stopping simulations
|
|
| 9 May 2013 |
Chris Sherlock
|
Properties and Optimisation of the Pseudo Marginal RWM
|
|
| 17 Apr 2013 |
Anthony Lee
University of Warwick
|
Particle Markov chain Monte Carlo and marginal likelihood estimation: strategies for improvement
|
|
| 22 Mar 2013 |
Dennis Prangle
|
Likelihood-free parameter estimation for state space models
|
|
| 20 Feb 2013 |
Joe Mellor
University of Manchester
|
Thompson Sampling in Switching Environments with Bayesian Online Change Point Detection
|
|
| 21 Jun 2012 |
Nicos Pavlidis
Lancaster University
|
Classification in Dynamic Streaming Environments
|
|
| 6 Jun 2012 |
Paul Fearnhead
|
Hamiltonian Monte Carlo: Beyond Kinetic Energy
|
|
| 22 May 2012 |
Chris Sherlock
|
Metropolis Adjusted Langevin Algorithm (MALA), simplified Manifold MALA, and Hamiltonian Monte Carlo: motivation, explanation and application
|
|
| 14 Feb 2012 |
Dennis Prangle
|
Summary statistics for ABC model choice
|
|
| 13 Dec 2011 |
Paul Fearnhead
|
Constructing summary statistics for approximate Bayesian computation: semi-automatic ABC
|
|
| 16 Nov 2011 |
Haeran Cho
London School of Economics
|
High-dimensional variable selection via tilting
|
|
| 17 Jun 2011 |
Gareth Ridall
|
Online inference and model selection using sequential Monte Carlo
|
|
| 24 May 2011 |
Chris Sherlock
|
Simulation of mixed speed biochemical reactions using the linear noise approximation
|
|
| 15 Mar 2011 |
Paul Fearnhead
|
Reading group on “An explicit link between Gaussian fields and Gaussian Markov random fields: The SPDE approach”
|
RSS
|
| 15 Feb 2011 |
Neil Drummond
Lancaster University
|
Quantum Monte Carlo
|
|
| 18 Jan 2011 |
Rebecca Killick
|
Optimal detection of changepoints with a linear computational cost
|
|
| 7 Dec 2010 |
Dennis Prangle
|
Using ABC for sequential Bayesian analysis
|
|
| 10 Nov 2010 |
Krzysztof Latuszynski
University of Warwick
|
Exact Inference for a Markov switching diffusion model with discretely observed data
|
|
| 3 Nov 2010 |
Anastasia Lykou
|
Bayesian variable selection using Lasso
|
|
| 10 Oct 2010 |
Paul Fearnhead
|
Reading group on “Riemann manifold Langevin and Hamiltonian Monte Carlo methods”
|
|
| 3 Sep 2010 |
Paul Fearnhead
|
Particle Filters for models with fixed parameters
|
|
| 16 Feb 2010 |
Gareth Ridall
|
Reading group on Particle MCMC and the pseudo marginal algorithm
|
|
| 1 Dec 2009 |
Chris Sherlock
|
The random walk Metropolis: general criteria for the 0.234 acceptance rate rule
|
|
| 3 Nov 2009 |
Giorgos Sermaidis
|
Likelihood based inference for discretely observed diffusions
|
|
| 20 Oct 2009 |
Paul Fearnhead
|
Sequential Importance Sampling for General Diffusion Models
|
|
| 28 Apr 2009 |
Chris Sherlock
Reading Group
|
The Integrated Nested Laplace Approximation of Rue et al. (2009)
|
RSS
|
| 2 Dec 2008 |
Paul Fearnhead
|
change point models and fault detection
|
|
| 28 Oct 2008 |
Chris Sherlock
|
Optimal scaling of the random walk Metropolis - Part 1
|
|
| 2 Jun 2008 |
Ben Taylor
|
Adaptive Sequential Monte Carlo Methods For Static Inference in Bayesian Mixture Analysis
|
|
| 13 May 2008 |
Joe Whittaker
|
The linear least squares prediction view of conjugate gradients
|
|
| 18 Mar 2008 |
Hongsheng Dai
|
Perfect sampling for Random Trees
|
|
| 4 Mar 2008 |
Dennis Prangle
|
An MCMC method for Approximate Bayesian Computation
|
|
| 5 Feb 2008 |
Paul Smith
|
Bayesian Analysis of ARMA and Transfer Function Time Series Models
|
|
| 28 Nov 2007 |
Chris Sherlock
|
Power sums of lognormals
|
|
| 21 Nov 2007 |
Thomas Jaki
|
Asymptotic simultaneous bootstrap confidence bounds for simple linear regression lines
|
|
| 31 Oct 2007 |
Paul Fearnhead
|
Using particle filters within MCMC
|
|