spmontecarlo.bsky.social
Lecturer in Maths & Stats at Bristol. Interested in probabilistic + numerical computation, statistical modelling + inference. (he / him). Homepage: https://sites.google.com/view/sp-monte-carlo Seminar: https://sites.google.com/view/monte-carlo-semina
812 posts 2,177 followers 1,857 following

Trundling northwards on my way to spend a week visiting Newcastle! Should be fun.

Great day! Definitely recommend checking out the talk recordings when they go online - loads of cool stuff on show.

OMW to Day 2 of the Post-Bayes Workshop (postbayes.github.io/workshop2025/) at UCL - should be an interesting day!

In slides from a recent talk - the { virtuous / vicious } cycle of filtering, smoothing, and parameter estimation in state space models.

looking forward to joining this for the next couple of days!

very nice-looking thesis! arxiv.org/abs/2505.07267 'Adaptive, Robust and Scalable Bayesian Filtering for Online Learning' - Gerardo Duran-Martin

Curious to hear from my stats and data science connections, when should one use Principal Component Analysis (PCA)? I don't know the answer, but I have some thoughts and I was hoping to get a vibe check from those more knowledgeable! Would appreciate a boost on this post.

very interesting talk: youtu.be/XLqrywnK-0A?... "A Spectral Theory of Volterra Eqns and Applications to ML of Material Dynamics" - George Stepaniants

Here's a fun set of calculations (not original to me, but often presented differently): Suppose that one has a Bayesian model for some data, in the form of a joint distribution over model parameters and observations. Typically, Bayesian inference happens directly with the posterior distribution.
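The setup described above can be written compactly. A minimal sketch (the symbols θ for parameters and y for observations are my own notation, not from the post):

```latex
% Bayesian model: joint distribution over parameters and data
p(\theta, y) = p(\theta)\, p(y \mid \theta)
% Inference proceeds via the posterior, given by Bayes' rule:
\qquad
p(\theta \mid y) = \frac{p(\theta)\, p(y \mid \theta)}{p(y)},
\qquad
p(y) = \int p(\theta)\, p(y \mid \theta)\, \mathrm{d}\theta .
```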

One of the more peculiar ~counter-examples out there, for a probability measure satisfying the Talagrand T2 inequality (hence sub-Gaussian, satisfying Poincaré, and more), but not satisfying any logarithmic Sobolev inequality. Right in the gap in between - is it a nice probability measure or not?
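For reference, the standard hierarchy of functional inequalities alluded to here, with constants suppressed (LSI = logarithmic Sobolev inequality, T₂ = Talagrand's transport-entropy inequality):

```latex
\text{LSI} \;\Longrightarrow\; T_2 \;\Longrightarrow\; \text{Poincaré},
\qquad
T_2 \;\Longrightarrow\; \text{sub-Gaussian concentration}.
```

The counter-example in question sits strictly between the first two classes: it satisfies T₂ (and hence everything downstream) while failing every LSI.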

crushing it on the free will machine (people.ischool.berkeley.edu/~nick/aarons...)

Big fan of this perspective:

It's a bit interesting in maths when you have results that are somehow so qualitatively self-evident that i) the big-picture implications are already clear, so that ii) working out the quantitative picture is not seen as being so important. I'm experiencing this a bit with multimodality in sampling.

Not bad, as silly initialisms go:

Enjoying reading this. Clarifies some nice connections between scoring rules, probabilistic divergences, convex analysis, and so on. Should read it even more closely, to be honest!

One of the things which I like a lot about complexity analyses of different algorithms is that they highlight the axes along which a problem can be difficult. The flip side is that when somebody proposes a new method, you should be able to identify the axis along which they expect to see benefits.

Essential details for any notes on Combinatorics:

Impressively bold introduction from Corless here: