mniepert.bsky.social
Professor @ University of Stuttgart, Scientific Advisor @ NEC Labs, GraphML, geometric deep learning, ML for Science and Simulations. Formerly @IUBloomington and @uwcse
8 posts · 1,172 followers · 355 following

The slides for my lectures on (Bayesian) Active Learning, Information Theory, and Uncertainty are online now 🥳 They cover quite a bit from basic information theory to some recent papers: blackhc.github.io/balitu/ and I'll try to add proper course notes over time 🤗
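
A staple from exactly this material is the BALD acquisition function: the mutual information between a point's label and the model parameters. A minimal sketch of the usual Monte Carlo estimate (the shapes and the MC-dropout-style sampling here are my illustrative assumptions, not taken from the slides):

```python
import torch

def bald_scores(probs: torch.Tensor) -> torch.Tensor:
    """BALD: mutual information between the predicted label and the
    model parameters, estimated from Monte Carlo samples.

    probs: [num_mc_samples, num_points, num_classes] predictive
    probabilities from sampled models (e.g. via MC dropout).
    """
    mean_probs = probs.mean(dim=0)
    # Entropy of the averaged prediction (total uncertainty) ...
    total = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)
    # ... minus the average entropy of each sample (aleatoric part).
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return total - aleatoric  # epistemic uncertainty, used for acquisition

# Toy usage: 10 MC samples, 5 candidate points, 3 classes.
probs = torch.softmax(torch.randn(10, 5, 3), dim=-1)
print(bald_scores(probs))
```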

Want to turn your state-of-the-art diffusion models into ultra-fast few-step generators? 🚀 Learn how to optimize your time discretization strategy—in just ~10 minutes! ⏳✨ Check out how it's done in our Oral paper at ICLR 2025 👇

Welcome to our Bluesky account! 🦋 We're excited to announce the ComBayNS workshop: Combining Bayesian & Neural Approaches for Structured Data 🌐 Submit your paper and join us in Rome for #IJCNN2025! 🇮🇹 📅 Papers due: March 20th, 2025 📜 Webpage: combayns2025.github.io

🚀 Exciting news! Our paper "Learning to Discretize Diffusion ODEs" has been accepted as an Oral at #ICLR2025! 🎉 [1/n] We propose LD3, a lightweight framework that learns the optimal time discretization for sampling from pre-trained Diffusion Probabilistic Models (DPMs).
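
The core idea, roughly: make the sampler's few timesteps learnable parameters and fit them so the few-step output matches a many-step "teacher" run. A toy sketch of that idea on a stand-in ODE (my own simplification; not LD3's actual objective, solver, or parameterization):

```python
import torch

# Toy stand-in for a pre-trained probability-flow ODE: dx/dt = f(x, t).
def f(x, t):
    return -x / (t + 0.1)

def sample(x0, timesteps):
    """Euler sampler over a given (decreasing) time discretization."""
    x = x0
    for t0, t1 in zip(timesteps[:-1], timesteps[1:]):
        x = x + (t1 - t0) * f(x, t0)
    return x

# Learnable few-step schedule: a softmax over increments keeps the
# timesteps monotonically decreasing from t_max to t_min.
logits = torch.zeros(5, requires_grad=True)
t_max, t_min = 1.0, 0.01
opt = torch.optim.Adam([logits], lr=1e-2)

x0 = torch.randn(64)
# "Teacher": the same ODE solved on a fine uniform grid.
target = sample(x0, torch.linspace(t_max, t_min, 200))

for step in range(200):
    increments = torch.softmax(logits, dim=0) * (t_max - t_min)
    timesteps = t_max - torch.cat([torch.zeros(1), increments.cumsum(0)])
    loss = ((sample(x0, timesteps) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```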

Very excited to announce the Neurosymbolic Generative Models special track at NeSy 2025! Looking forward to all your submissions!

arxiv.org/abs/2412.11569, a very relevant effort!

Catch my poster tomorrow at the NeurIPS MLSB Workshop! We present a simple (yet effective 😁) multimodal Transformer for molecules, supporting multiple 3D conformations & showing promise for transfer learning. Interested in molecular representation learning? Let’s chat 👋!

We will run out of data for pretraining and see diminishing returns. In many application domains, such as the sciences, we also have to be careful about which data we pretrain on to be effective. It is important to adaptively generate new data from physical simulators. Excited about the work below

I'll present our paper in the afternoon poster session, 4:30–7:30 pm, in East Exhibit Hall A-C, poster 3304!

Neural surrogates can accelerate PDE solving but need expensive ground-truth training data. Can we reduce the training data size with active learning (AL)? In our NeurIPS D3S3 poster, we introduce AL4PDE, an extensible AL benchmark for autoregressive neural PDE solvers. 🧵
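
For flavor, here is the bare-bones shape of a pool-based AL loop of the kind such a benchmark would evaluate. The simulator, surrogate, and ensemble-variance acquisition below are illustrative stand-ins, not the AL4PDE API:

```python
import numpy as np

def simulate(params):                      # expensive ground-truth solver
    return np.sin(params).sum(axis=-1)

class Surrogate:                           # cheap learned stand-in
    def fit(self, X, y): self.coef = np.linalg.lstsq(X, y, rcond=None)[0]
    def predict(self, X): return X @ self.coef

rng = np.random.default_rng(0)
pool = rng.uniform(-3, 3, size=(500, 4))   # candidate PDE parameters
X = pool[:8]; y = simulate(X)              # small initial dataset

for round_ in range(5):
    # Train a small ensemble on bootstrap resamples of the labeled data.
    ensemble = []
    for _ in range(5):
        idx = rng.integers(len(X), size=len(X))
        m = Surrogate(); m.fit(X[idx], y[idx]); ensemble.append(m)
    preds = np.stack([m.predict(pool) for m in ensemble])
    # Query the candidates where the ensemble disagrees most.
    query = np.argsort(preds.var(axis=0))[-8:]
    X = np.vstack([X, pool[query]])
    y = np.concatenate([y, simulate(pool[query])])
```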

Join us today at #NeurIPS2024 for our poster presentation: Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing 🗓️ When: Wed, Dec 11, 11 a.m. – 2 p.m. PST 📍 Where: East Exhibit Hall A-C, Poster #4107 #MachineLearning #InteratomicPotentials #Equivariance #GraphNeuralNetworks

"Transferability of atom-based neural networks" authored by @januseriksen.bsky.social (thanks for publishing with us, amazing work!) is now out as part of the #QuantumChemistry and #ArtificialIntelligence focus collection #MachineLearningScienceandTechnology. Link: iopscience.iop.org/article/10.1...

1/6 We're excited to share our #NeurIPS2024 paper: Probabilistic Graph Rewiring via Virtual Nodes! It addresses key challenges in GNNs, such as over-squashing and under-reaching, while reducing reliance on heuristic rewiring. w/ Chendi Qian, @christophermorris.bsky.social @mniepert.bsky.social 🧵
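
To see the structural effect virtual nodes have, here is a deterministic toy version: one extra node wired to every existing node gives any pair of nodes a two-hop path, which is what helps against over-squashing and under-reaching. The paper's rewiring is probabilistic and learned; this sketch is not:

```python
import torch

def add_virtual_node(edge_index: torch.Tensor, num_nodes: int):
    """Connect one extra 'virtual' node to every existing node in
    both directions. edge_index is the usual [2, num_edges] tensor."""
    v = num_nodes                          # index of the new node
    nodes = torch.arange(num_nodes)
    new_edges = torch.stack([
        torch.cat([nodes, torch.full((num_nodes,), v)]),
        torch.cat([torch.full((num_nodes,), v), nodes]),
    ])
    return torch.cat([edge_index, new_edges], dim=1), num_nodes + 1

# A 4-node path graph 0-1-2-3 (directed edges in both directions).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
edge_index, n = add_virtual_node(edge_index, num_nodes=4)
```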

New #compchem paper out in MLST. We study the transferability of both invariant and equivariant neural networks when training these either exclusively on total molecular energies or in combination with data from different atomic partitioning schemes: iopscience.iop.org/article/10.1...

You should take a look at this if you want to know how to use Cartesian (instead of spherical) tensors for building equivariant MLIPs.

📣 Can we go beyond state-of-the-art message-passing models based on spherical tensors such as #MACE and #NequIP? Our #NeurIPS2024 paper explores higher-rank irreducible Cartesian tensors to design equivariant #MLIPs. Paper: arxiv.org/abs/2405.14253 Code: github.com/nec-research...
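
For intuition, the rank-2 case fits in a few lines: a Cartesian tensor built from two vectors splits into a scalar trace (l=0), an antisymmetric part (l=1), and a symmetric traceless part (l=2). A minimal sketch of that decomposition (the paper works with higher ranks inside message passing; this is just the textbook building block):

```python
import torch

def rank2_irreducible_parts(u: torch.Tensor, v: torch.Tensor):
    """Decompose the rank-2 Cartesian tensor u ⊗ v into its
    irreducible parts under rotations."""
    T = torch.outer(u, v)
    eye = torch.eye(3)
    scalar = T.trace() / 3 * eye                          # l = 0
    antisym = (T - T.T) / 2                               # l = 1 (3 components)
    sym_traceless = (T + T.T) / 2 - T.trace() / 3 * eye   # l = 2 (5 components)
    return scalar, antisym, sym_traceless

u, v = torch.randn(3), torch.randn(3)
parts = rank2_irreducible_parts(u, v)
# The three parts reassemble the original tensor exactly.
assert torch.allclose(sum(parts), torch.outer(u, v), atol=1e-6)
```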

@ropeharz.bsky.social forced me to do this starter pack on #tractable #probabilistic modeling and #reasoning in #AI and #ML. Please reply below if you want to be added (and sorry if I missed you the first time around). go.bsky.app/DhVNyz5

Amazing opportunity for #Neurosymbolic folks! 🚨🚨🚨 We are looking for a Tenure Track Prof for the 🇦🇹 #FWF Cluster of Excellence Bilateral AI (think #NeSy ++) www.bilateral-ai.net A nice starting pack for fully funded PhDs is included. jobs.tugraz.at/en/jobs/226f...

I haven’t read it carefully, but +1 to works like the one below. It mentions learning artifacts from discreteness. We saw something similar in this paper, where bad integration of the true Hamiltonian did worse than a learned model (which absorbed the artifacts). arxiv.org/abs/1909.12790
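
A toy illustration of the kind of discreteness artifact in question (my own example, not the setup from either paper): a naive Euler integrator visibly corrupts the energy of a true Hamiltonian system, while a symplectic integrator keeps it bounded.

```python
# Harmonic oscillator, H(q, p) = (q**2 + p**2) / 2.
def euler(q, p, dt):          # naive explicit Euler: energy drifts upward
    return q + dt * p, p - dt * q

def leapfrog(q, p, dt):       # symplectic: energy error stays bounded
    p = p - dt / 2 * q
    q = q + dt * p
    p = p - dt / 2 * q
    return q, p

q_e = q_l = 1.0; p_e = p_l = 0.0; dt = 0.1
for _ in range(1000):
    q_e, p_e = euler(q_e, p_e, dt)
    q_l, p_l = leapfrog(q_l, p_l, dt)

print("Euler energy:   ", (q_e**2 + p_e**2) / 2)  # blows up (~1e4)
print("Leapfrog energy:", (q_l**2 + p_l**2) / 2)  # stays near 0.5
```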

We've all been there 🤓 #DeepLearning

For those who missed this post on the-network-that-is-not-to-be-named, I made public my "secrets" for writing a good CVPR paper (or any scientific paper). I've compiled these tips over many years. It's long, but hopefully it helps people write better papers. perceiving-systems.blog/en/post/writ...

Can deep learning finally compete with boosted trees on tabular data? 🌲 In our NeurIPS 2024 paper, we introduce RealMLP, a NN with improvements in all areas and meta-learned default parameters. Some insights about RealMLP and other models on large benchmarks (>200 datasets): 🧵
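
For a frame of reference, this is the kind of vanilla tabular MLP such work starts from (emphatically not RealMLP itself, which changes the preprocessing, architecture, and training defaults):

```python
import torch
import torch.nn as nn

# A plain tabular MLP baseline; RealMLP's contributions are the many
# improvements layered on top of something like this.
class TabularMLP(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Toy training run on random data.
X, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
model = TabularMLP(n_features=20, n_classes=2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
```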