vincefort.bsky.social
PI at Helmholtz AI, Faculty at TU Munich, Fellow at Zuse School for reliable AI, Branco Weiss Fellow, ELLIS Scholar.
Prev: Cambridge CBL, St John's College, ETH Zürich, Google Brain, Microsoft Research, Disney Research.
https://fortuin.github.io/
comment in response to post
Seriously, please, let's kill this habit. I get why you're happy, and I'm happy for you! But it's all creating the feeling that you need to be on many papers.
comment in response to post
The Romans fight Carthage at Cannae, but the Carthaginians have Giant War Chickens
comment in response to post
I guess the extreme version of this would be a world where half of the time, I watch stuff like Squid Game, not because I actually like it but because everyone around me talks about it; and the other half, I watch some AI-generated shows that Netflix has produced just for me and nobody else?
comment in response to post
That we likely don't have any idea what Bayes really looked like is one of the saddest history facts...
comment in response to post
Congrats! 😊
comment in response to post
I think @lawrennd.bsky.social has proposed "anthroxing"
comment in response to post
This was a really awesome collaboration with Tristan Cinquin, Marvin Pförtner, @philipphennig.bsky.social, and Robert Bamler!
comment in response to post
To learn more, visit our poster at #NeurIPS2024
📅 Thursday, 4:30 PM
📍 Poster #3907
Or read the paper here: arxiv.org/abs/2407.13711
comment in response to post
🌊 We show that this can improve performance over standard Laplace with weight-space priors in real-world scientific tasks, such as this ocean current modeling problem
comment in response to post
💡 In our work, we propose to use the Laplace approximation in function space! This is mathematically principled (after a bit of measure theory) and can be efficiently implemented using matrix-free linear algebra 🚀
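To make the "matrix-free" part concrete, here is a minimal JAX sketch of a linearized Laplace predictive in function space. The toy network, the GGN-as-curvature choice, and the CG-based solves are my own illustrative assumptions, not the paper's actual implementation:

```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

def net_apply(theta, x):
    # Toy one-hidden-layer net; theta is a flat (40,) parameter vector.
    w1 = theta[:20].reshape(1, 20)
    w2 = theta[20:40].reshape(20, 1)
    return (jnp.tanh(x @ w1) @ w2).squeeze(-1)

def laplace_predictive(theta_map, x_train, x_test, prior_prec=1.0, noise_var=0.1):
    # Matrix-free Jacobians at the MAP: f(x) ≈ f_MAP(x) + J(x)(theta - theta_MAP).
    _, jvp_tr = jax.linearize(lambda t: net_apply(t, x_train), theta_map)
    _, vjp_tr = jax.vjp(lambda t: net_apply(t, x_train), theta_map)
    f_mean, jvp_te = jax.linearize(lambda t: net_apply(t, x_test), theta_map)
    _, vjp_te = jax.vjp(lambda t: net_apply(t, x_test), theta_map)

    # Posterior precision H = GGN + prior precision, accessed only via H @ v:
    # GGN v = J_tr^T (J_tr v) / noise_var, which is PSD, so CG is safe.
    def hvp(v):
        return vjp_tr(jvp_tr(v))[0] / noise_var + prior_prec * v

    # Function-space predictive covariance J_te H^{-1} J_te^T, one conjugate-
    # gradient solve per test point; H itself is never materialized.
    def cov_col(e_i):
        h_inv_jt_e, _ = cg(hvp, vjp_te(e_i)[0])
        return jvp_te(h_inv_jt_e)

    n = x_test.shape[0]
    f_cov = jnp.stack([cov_col(jnp.eye(n)[i]) for i in range(n)], axis=1)
    return f_mean, f_cov

# Toy usage (a random stand-in for a trained MAP estimate):
key = jax.random.PRNGKey(0)
theta_map = 0.1 * jax.random.normal(key, (40,))
x_tr = jnp.linspace(-1.0, 1.0, 16).reshape(-1, 1)
x_te = jnp.linspace(-1.5, 1.5, 8).reshape(-1, 1)
mean, cov = laplace_predictive(theta_map, x_tr, x_te)
```

The design point is that the posterior precision only ever appears through products H @ v, so nothing quadratic in the number of network parameters needs to be stored.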
comment in response to post
Naively, one might just try to use variational inference to train a BNN with a GP prior, but it has been pointed out that this leads to some mathematical issues (infinite KL, etc.): arxiv.org/abs/2011.09421
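For context, the function-space objective one would naively optimize is roughly (my notation, a rough sketch of the issue):

```latex
\mathcal{L}(q) = \mathbb{E}_{q(f)}\left[\log p(y \mid f)\right]
               - \mathrm{KL}\!\left(q(f) \,\|\, p(f)\right),
```

and the cited paper shows that when the BNN's pushforward measure q(f) and the GP prior p(f) are mutually singular, the KL term is infinite, so the objective breaks down.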
comment in response to post
🤔 One strength of Bayesian methods is the incorporation of prior knowledge, but it is not trivial to come up with a meaningful prior for BNNs in weight space...
📈 However, in function space, we often have prior beliefs, e.g., in the form of Gaussian processes!
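As a toy illustration of what "prior beliefs in function space" means (my own example, not from the paper): a GP prior with an RBF kernel directly encodes that functions should be smooth, which is very hard to express through a weight-space prior.

```python
import jax
import jax.numpy as jnp

# Encode a smoothness prior in function space as a GP with an RBF kernel,
# then draw prior function samples to see what it believes a priori.
def rbf_kernel(x1, x2, lengthscale=0.5, variance=1.0):
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * jnp.exp(-0.5 * sq_dist / lengthscale ** 2)

xs = jnp.linspace(-3.0, 3.0, 100)
K = rbf_kernel(xs, xs) + 1e-6 * jnp.eye(100)        # jitter for stability
L = jnp.linalg.cholesky(K)
key = jax.random.PRNGKey(0)
prior_draws = L @ jax.random.normal(key, (100, 5))  # 5 smooth prior functions
```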
comment in response to post
This was a super fun collaboration, led by my master's student Rayen Dhahri, with Alex Immer, @bertrand-sharp.bsky.social, and Stephan Günnemann
comment in response to post
To learn more, visit our poster next week at #NeurIPS2024
📅 Wed, Dec 11 | 11 AM - 2 PM PST
📍 East Exhibit Hall A-C #4110, Vancouver Convention Centre
📄 Or check out the paper here: arxiv.org/abs/2402.15978
comment in response to post
🤔 How efficient can sparsification be without sacrificing performance?
☝️ We showcase significant computational savings while retaining high performance across different sparsity levels:
📈 Up to 20x computational savings with minimal accuracy degradation
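To make the sparsity-to-savings arithmetic concrete, here is a generic magnitude-pruning sketch in JAX. To be clear, this is not the criterion from the paper, just the standard baseline, to illustrate how a sparsity level maps to compute savings:

```python
import jax.numpy as jnp

# Generic magnitude pruning (an illustrative baseline, not the paper's
# Bayesian sparsification method): zero out the smallest-magnitude weights.
def magnitude_prune(weights, sparsity=0.95):
    """Return pruned weights and the binary mask; `sparsity` = fraction removed."""
    flat = jnp.abs(weights).ravel()
    k = int(flat.size * (1.0 - sparsity))   # number of weights to keep
    threshold = jnp.sort(flat)[-k]          # magnitude cutoff for the top-k
    mask = (jnp.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# At 95% sparsity only ~1/20 of the multiply-accumulates remain, which is
# roughly where a ~20x computational saving can come from (given sparse kernels).
```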