ltronneberg.bsky.social
Postdoctoral fellow @ University of Oslo (back home in 🏔️🇳🇴🏔️), Bayesian stats (high-dimensional, nonparametric, ML), biostatistics, computation. Previously @ MRC Biostatistics Unit, University of Cambridge 🇬🇧
28 posts · 527 followers · 595 following
comment in response to post
I’d also be interested in this, Martin. How it’s structured, what is covered, what the prerequisites are, etc.
comment in response to post
I think that could indeed be quite illuminating. Often we approach these topics trying to make them practical without requiring too much mathematical setup. You don’t *need* stochastic process theory to get at the core of GP regression, but you might need it to understand the finer details later on
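To make that concrete: the core of GP regression needs nothing beyond multivariate-normal conditioning. A minimal NumPy sketch, assuming an RBF kernel and fixed hyperparameters (illustrative choices, not from the post):

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and covariance of a zero-mean GP at x_test."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)                     # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                           # K_*^T K^{-1} y
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v                           # K_** - K_*^T K^{-1} K_*
    return mean, cov

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)
mu, Sigma = gp_posterior(x, y, np.linspace(-3, 3, 100))
```

Everything here is finite-dimensional linear algebra; the stochastic process theory only enters when you ask what object the posterior mean and covariance are consistent slices of.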
comment in response to post
Interesting post! I helped run a similar module in Cambridge last year where we tried to cram in too much, covering the basics of GPs *and* DPs. If the emphasis is on BNP, I think one must mention DPs at some point, though it is a step up in abstraction compared to GPs. «Distribution over distributions», etc.
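For anyone not familiar with that phrase: a draw from a DP is itself a discrete probability distribution, which the stick-breaking construction makes concrete. A minimal sketch (the parameter choices are illustrative):

```python
import numpy as np

def dp_stick_breaking(alpha, base_sampler, n_atoms=1000, rng=None):
    """One (truncated) draw G from DP(alpha, G0).

    G is a discrete distribution: atoms sampled from the base measure G0,
    weights from stick-breaking:
    w_k = v_k * prod_{j<k} (1 - v_j),  v_k ~ Beta(1, alpha).
    """
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    atoms = base_sampler(n_atoms, rng)
    return atoms, w

# Each call returns a different random distribution over the reals:
atoms, w = dp_stick_breaking(alpha=5.0,
                             base_sampler=lambda n, r: r.standard_normal(n))
```

Sampling from the sampler's output twice gives two different distributions, which is exactly the step up in abstraction relative to a GP's random functions.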
comment in response to post
It can be bounded from below and above by \sqrt{n} and n, respectively. These two cases each reflect an extreme lengthscale, \ell=\infty or \ell=0, and so it appears almost like a measure of complexity. I'm struggling to come up with an intuitive explanation. Does anyone have any ideas/references?
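The original quantity isn't spelled out here, but one candidate consistent with these bounds is \mathrm{tr}(K^{1/2}) = \sum_i \sqrt{\lambda_i} for an n \times n kernel matrix with unit diagonal: \ell \to \infty gives the rank-one all-ones matrix (eigenvalues (n, 0, \dots, 0), so the sum is \sqrt{n}), while \ell \to 0 gives the identity (sum is n). A quick numerical check under that assumption:

```python
import numpy as np

def trace_sqrt_rbf(x, lengthscale):
    """sum_i sqrt(lambda_i) for an RBF kernel matrix with unit diagonal."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / lengthscale) ** 2)
    lam = np.linalg.eigvalsh(K)
    return np.sqrt(np.clip(lam, 0.0, None)).sum()  # clip tiny negatives

n = 50
x = np.linspace(0, 1, n)
for ell in [1e-3, 0.1, 1.0, 1e3]:
    # falls from ~n (identity-like K) toward ~sqrt(n) (all-ones K)
    print(ell, trace_sqrt_rbf(x, ell))
```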
comment in response to post
Maybe I'll add that the call is very general across "computational science", encompassing biology, chemistry, physics etc. alongside maths & stats. Bayesian machine learning is just one of the projects within our department, see the link below www.uio.no/dscience/eng...
comment in response to post
Are you assuming you are able to compute the derivatives as well, or simply that you know they are non-negative?
comment in response to post
Surely the superstructure is made of steel..?
comment in response to post
It's quite ironic that the term was first coined in a sociological essay warning *against* meritocracy, rather than lifting it up as an ideal.
comment in response to post
Yeah, I think for most applications something like a GAM or a GP would be the practical choice compared to a BNN. I'm wondering if the number of layers/parameters needed to get something truly more flexible is just too large to work with
comment in response to post
Hmm, from an initial check __not really__. I initially only fit a single chain, so I figured maybe I was exploring a single mode, but when I reran the model with 4 chains I still don't really see much multimodality. Maybe n is so small that the prior dominates?
comment in response to post
Also, in terms of the sampler complaining, Rhat and effective sample size are awful for most of the parameters (weights), but for the estimated function itself at the input locations it is generally Rhat < 1.02 and n_eff > 300 from 1000 samples. Nice to see these effects for myself
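A sketch of this kind of check, assuming a hypothetical Stan file bnn.stan with a parameter vector `weights` and the fitted function values `f` in generated quantities (the cmdstanpy/ArviZ calls are real; the model and data files are placeholders):

```python
import arviz as az
from cmdstanpy import CmdStanModel

# Hypothetical model file: a BNN with parameter vector `weights`
# and fitted function values `f` in generated quantities.
model = CmdStanModel(stan_file="bnn.stan")
fit = model.sample(data="bnn_data.json", chains=4, iter_sampling=1000)

idata = az.from_cmdstanpy(fit)
# Weights: expect poor mixing (hidden-unit symmetries, label switching).
print(az.summary(idata, var_names=["weights"])[["r_hat", "ess_bulk"]])
# Function values: often fine despite the weight-space mess.
print(az.summary(idata, var_names=["f"])[["r_hat", "ess_bulk"]])
```

The split is exactly the identifiability story: permuting hidden units changes the weights but not the function, so weight-space Rhat can look terrible while the posterior over f mixes fine.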
comment in response to post
This is a fairly small BNN, "only" ~8k parameters. But given my 100 datapoints, I'm a little surprised that Stan is so efficient. Also a bit disappointed in how smooth the functions look (when sampling from the prior); a GP would make quick work of this, even in a full Bayesian treatment.
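A minimal NumPy sketch of that prior check, assuming a small tanh MLP with standard-normal weights (a hypothetical architecture, not necessarily the exact model above); drawing from it next to an RBF-GP prior makes the smoothness comparison concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)

def bnn_prior_draw(x, widths=(50, 50), rng=rng):
    """One function drawn from a tanh MLP prior with N(0,1) weights,
    scaled by 1/sqrt(fan_in) so layers stay on a comparable scale."""
    h = x[:, None]
    for w in widths:
        W = rng.standard_normal((h.shape[1], w)) / np.sqrt(h.shape[1])
        b = rng.standard_normal(w)
        h = np.tanh(h @ W + b)
    W = rng.standard_normal((h.shape[1], 1)) / np.sqrt(h.shape[1])
    return (h @ W).ravel()

def gp_prior_draw(x, lengthscale=0.5, rng=rng):
    """One function drawn from a zero-mean RBF GP prior."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / lengthscale) ** 2)
    K += 1e-8 * np.eye(len(x))  # jitter for the Cholesky
    return np.linalg.cholesky(K) @ rng.standard_normal(len(x))

bnn_draws = np.stack([bnn_prior_draw(x) for _ in range(5)])
gp_draws = np.stack([gp_prior_draw(x) for _ in range(5)])
```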
comment in response to post
«I love Abba». The delivery of this is so deadpan, it's hilarious
comment in response to post
I am too! Ignored every warning about it being crazy addictive. So good
comment in response to post
I've sort of always thought of that as the difference between submitting something to a conference or writing a longer, full paper for a journal, depending on how deep it goes. Not to say that conference papers can't contain deep ideas and novelty, and for some fields they are the main publishing channel
comment in response to post
Some of us care about GPs!
comment in response to post
Renate Meyer gave a really great talk involving Whittle likelihoods at the BSU earlier this year. There the Whittle likelihood was used as a "jumping off" point and nonparametrically adjusted in the frequency domain for modelling time series (www.mrc-bsu.cam.ac.uk/events/bsu-s...), very clever!
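For context, the Whittle likelihood approximates the Gaussian time-series likelihood in the frequency domain by matching the periodogram I(\omega_j) to a model spectral density f(\omega_j; \theta), via \ell_W(\theta) = -\sum_j [\log f(\omega_j; \theta) + I(\omega_j)/f(\omega_j; \theta)]. A minimal sketch using an AR(1) spectral density (an illustrative choice, not the model from the talk):

```python
import numpy as np

def periodogram(y):
    """Periodogram I(omega_j) at the positive Fourier frequencies."""
    n = len(y)
    dft = np.fft.rfft(y - y.mean())
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    freqs = 2 * np.pi * np.fft.rfftfreq(n)
    return freqs[1:-1], I[1:-1]          # drop omega = 0 and Nyquist

def ar1_spectral_density(omega, phi, sigma2):
    """AR(1) spectral density: f(w) = sigma2 / (2*pi*|1 - phi*e^{-iw}|^2)."""
    return sigma2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * omega)) ** 2)

def whittle_loglik(y, phi, sigma2):
    """Whittle approximation: -sum_j [log f(w_j) + I(w_j) / f(w_j)]."""
    w, I = periodogram(y)
    f = ar1_spectral_density(w, phi, sigma2)
    return -np.sum(np.log(f) + I / f)

# Simulate an AR(1) and evaluate the Whittle log-likelihood at the truth:
rng = np.random.default_rng(0)
y = np.zeros(512)
for t in range(1, 512):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()
print(whittle_loglik(y, phi=0.7, sigma2=1.0))
```

The nonparametric adjustment in the talk, as I understood it, replaces or corrects f(\omega; \theta) in the frequency domain rather than trusting a single parametric form.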
comment in response to post
🙋‍♂️ I'd like to join, working on Bayes ML/Stats