mathurinmassias.bsky.social
Tenured Researcher @INRIA, Ockham team. Teacher @Polytechnique and @ENSdeLyon Machine Learning, Python and Optimization
14 posts 552 followers 81 following

🎓🌞 Join the PhD course on Theoretical Foundations of Machine Learning at 🇮🇹 ELLIS Unit Genoa, June 23-27, 2025, hosted by Machine Learning Genoa Center. 🗓️ Apply by: March 16, 2025 📍 University of Genoa (in-person only) 💡 More info: malga.unige.it/education/sc...

1-day workshop on Bilevel optimization and hyperparameter tuning at ENS de Lyon, on March 25th: gdr-iasis.cnrs.fr/reunions/bil... Keynote talks by @jmairal.bsky.social @tonysf.bsky.social @samuelvaiter.com, Saverio Salzo and Luce Brotcorne; contributed talks are welcome!

Our paper "PnP-Flow: Plug-and-Play Image Restoration with Flow Matching" has been accepted to ICLR 2025. Here's a short explainer: we want to restore images (i.e., solve inverse problems) using pretrained velocity fields from flow matching. However, using the change of variables formula is super costly.
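
To make the "costly" part concrete, here is a rough sketch (illustrative network and sizes, not the PnP-Flow code): the change of variables formula needs the exact divergence of the velocity field at every ODE step, and computing it naively takes one backward pass per coordinate.

```python
# Rough sketch: exact divergence (Jacobian trace) of a velocity field.
# The network `v` and the dimension are illustrative; real images have
# d around 10^5, and an ODE solver repeats this at every step.
import torch

d = 8 * 8 * 3                            # tiny "image" for the demo
v = torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.Tanh(),
                        torch.nn.Linear(d, d))

def divergence(v, x):
    x = x.clone().requires_grad_(True)
    out = v(x)
    div = 0.0
    for i in range(d):                   # O(d) backward passes: prohibitive
        (g,) = torch.autograd.grad(out[i], x, retain_graph=True)
        div = div + g[i]
    return div

x = torch.randn(d)
print(divergence(v, x))                  # cost of a single ODE step
```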

Today something crazy happened: POT has reached 1000 citations (total) 🤩🚀. Very proud to be part of a scientific community that acknowledges open source research software. Please continue to use, cite and contribute to POT! Small 🧵 below for those interested: pythonot.github.io
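
For the curious, a minimal sketch of what POT does, assuming the usual `ot.unif` / `ot.dist` / `ot.emd` API: compute an exact optimal transport plan between two small point clouds.

```python
# Minimal POT sketch: exact optimal transport between two toy point clouds.
import numpy as np
import ot  # pip install POT

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))           # source samples
y = rng.normal(size=(60, 2)) + 2.0     # target samples, shifted

a = ot.unif(50)                        # uniform weights on the source
b = ot.unif(60)                        # uniform weights on the target
M = ot.dist(x, y)                      # squared Euclidean cost matrix

plan = ot.emd(a, b, M)                 # exact OT plan (linear program)
print("OT cost:", np.sum(plan * M))
```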

MLSS is coming to Senegal! 📍 AIMS Mbour, Senegal 📅 June 23 - July 4, 2025. An international summer school to explore, collaborate, and deepen your understanding of machine learning in a unique and welcoming environment. Details: mlss-senegal.github.io

every cool thing I've ever done happened because I had one-or-two people I could constantly annoy with messages like "ooh look what I found" "ooh look what I learned" "ooh look what I broke" and who did the same in return

Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social, Remi Emonet and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition! We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423
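
A minimal sketch of the idea, assuming the straight-path conditional flow matching objective (x_t = (1 - t) x_0 + t x_1, target velocity x_1 - x_0); the toy data and network below are purely illustrative.

```python
# Minimal sketch of conditional flow matching on 2D toy data.
import torch

model = torch.nn.Sequential(               # small velocity network v(x, t)
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x1 = torch.randn(256, 2) * 0.3 + 2.0   # toy "data" samples
    x0 = torch.randn(256, 2)               # noise samples
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1             # point on the straight path
    target = x1 - x0                       # velocity of that path
    pred = model(torch.cat([xt, t], dim=1))
    loss = ((pred - target) ** 2).mean()   # flow matching regression loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Sampling then amounts to integrating the learned velocity field from noise to data with any ODE solver.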

Johnson-Lindenstrauss lemma in action: it is possible to embed any cloud of N points from R^d into R^k without distorting their pairwise distances too much, provided k is not too small (independently of d!). Better: any random Gaussian embedding works with high probability!
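
A quick numerical check of the claim, assuming a plain Gaussian projection scaled by 1/sqrt(k); all sizes are illustrative.

```python
# JL lemma in action: random Gaussian projection roughly preserves distances.
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 200, 10_000, 500
X = rng.normal(size=(N, d))                 # cloud of N points in R^d

P = rng.normal(size=(d, k)) / np.sqrt(k)    # random Gaussian embedding
Y = X @ P                                   # same points in R^k

def pairwise_dists(A):
    sq = (A ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.sqrt(np.clip(D2, 0, None))

mask = ~np.eye(N, dtype=bool)
ratios = pairwise_dists(Y)[mask] / pairwise_dists(X)[mask]
print(ratios.min(), ratios.max())           # concentrated around 1
```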

This year, there are 16 positions at CNRS in computer science (8 in "applied" domains → ask me, 8 in "fundamental" domains → ask the other David). @mathurinmassias.bsky.social has a good list of advice: mathurinm.github.io/cnrs_inria_a... Official 🔗 www.ins2i.cnrs.fr/en/cnrsinfo/... Don't wait!

Conditioning of a function = ratio between the largest and smallest eigenvalues of its Hessian. Higher conditioning => the function is harder to minimize. Gradient Descent gets faster as the conditioning L/mu decreases 👇
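
A small sketch to see this on a quadratic f(x) = 1/2 x^T diag(L, mu) x, whose Hessian eigenvalues are exactly L and mu; the 1/L step size and the tolerance are illustrative choices.

```python
# Gradient descent on a 2D quadratic: iteration count grows with L/mu.
import numpy as np

def gd_iters(L, mu, tol=1e-8, max_iter=100_000):
    H = np.diag([L, mu])                # Hessian of the quadratic
    x = np.array([1.0, 1.0])
    step = 1.0 / L                      # classical 1/L step size
    for it in range(max_iter):
        x = x - step * H @ x            # gradient of the quadratic is H x
        if np.linalg.norm(x) < tol:
            return it
    return max_iter

print(gd_iters(L=10.0, mu=5.0))     # conditioning 2: a few dozen iterations
print(gd_iters(L=10.0, mu=0.01))    # conditioning 1000: thousands of iterations
```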

If you have a PhD and you're interested in doing a postdoc near Paris on nonsmooth Frank-Wolfe methods for machine learning, apply for an FMJH postdoc here: www.fondation-hadamard.fr/fr/programme... Deadline is Dec 9!

New blog post: the Hutchinson trace estimator, or how to evaluate the divergence/Jacobian trace cheaply. Fundamental for Continuous Normalizing Flows: mathurinm.github.io/hutchinson/
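
A minimal sketch of the estimator, assuming Rademacher probe vectors and a toy vector field (not the blog post's code): trace(J) ≈ mean of z^T J z over random z with E[z z^T] = I, where each sample needs only one vector-Jacobian product instead of the full Jacobian.

```python
# Hutchinson trace estimator vs. exact Jacobian trace, on a toy vector field.
import torch

d = 50
W = torch.randn(d, d) / d ** 0.5

def f(x):                               # some vector field R^d -> R^d
    return torch.tanh(x @ W)

x = torch.randn(d, requires_grad=True)

# Exact trace for reference (forms the full Jacobian: d backward passes).
J = torch.autograd.functional.jacobian(f, x)
print("exact trace:", torch.trace(J).item())

# Hutchinson estimate: average z^T (J^T z) over Rademacher vectors z;
# each sample costs a single vector-Jacobian product.
est, n_samples = 0.0, 100
for _ in range(n_samples):
    z = torch.randint(0, 2, (d,)).float() * 2 - 1    # Rademacher +-1
    y = f(x)
    (vjp,) = torch.autograd.grad(y, x, grad_outputs=z)
    est += (z * vjp).sum() / n_samples
print("Hutchinson estimate:", est.item())
```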

Time to unearth posts from the previous network! 1°: Two equivalent views on PCA: maximize the variance of the projected data, or minimize the reconstruction error
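
A quick numerical illustration of the equivalence on toy 2D data (data and sizes are illustrative): the top eigenvector of the covariance both maximizes the projected variance and minimizes the reconstruction error, and the two quantities add up to the total variance.

```python
# Two equivalent views of PCA on centered toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X -= X.mean(axis=0)                                # center the data

cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)
u = eigvecs[:, -1]                                 # top principal direction

# View 1: variance of the projection, maximized by u.
proj_var = np.var(X @ u)
# View 2: per-sample squared reconstruction error, minimized by u.
X_rec = np.outer(X @ u, u)
rec_err = np.mean(np.sum((X - X_rec) ** 2, axis=1))

print("projected variance:", proj_var)
print("reconstruction error:", rec_err)
print("sum:", proj_var + rec_err, "= total variance:", np.var(X, axis=0).sum())
```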