mathurinmassias.bsky.social
Tenured Researcher @INRIA, Ockham team. Teacher @Polytechnique
and @ENSdeLyon
Machine Learning, Python and Optimization
14 posts
552 followers
81 following
Regular Contributor
comment in response to
post
Thanks for the kind words Sander!
comment in response to
post
I didn't know the paper; I'll read it ASAP and we'll edit our blog post accordingly!
comment in response to
post
The optimization loss in FM is easy to evaluate, and does not require integration as in CNFs. The whole process is smooth!
The illustrations are much nicer in the blog post, go read it!
👉👉 dl.heeere.com/conditional-... 👈👈
comment in response to
post
FM learns a vector field u, pushing the base distribution to the target through an ODE.
The key to learning it is to introduce a conditioning random variable, breaking the problem into smaller ones that have closed-form solutions.
Here's the magic: the small problems can be used to solve the original one!
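The conditioning trick above can be sketched in a few lines of NumPy. This is a hedged toy illustration, not the blog post's code: `model` is a hypothetical placeholder for a trained network, and the linear path x_t = (1-t)x0 + t*x1 with conditional velocity x1 - x0 is one standard choice of conditional path.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(model, x1, rng):
    """Monte Carlo estimate of a conditional flow matching loss.

    model: callable (x_t, t) -> predicted velocity (hypothetical placeholder
           for the learned vector field u).
    x1:    (n, d) samples from the target distribution.
    """
    n, d = x1.shape
    x0 = rng.standard_normal((n, d))   # samples from the base distribution
    t = rng.uniform(size=(n, 1))       # random times in [0, 1]
    x_t = (1 - t) * x0 + t * x1        # linear conditional path
    u_t = x1 - x0                      # closed-form conditional velocity
    residual = model(x_t, t) - u_t
    return np.mean(np.sum(residual**2, axis=1))

# A zero model gives a simple baseline value of the loss on a toy target.
zero_model = lambda x_t, t: np.zeros_like(x_t)
x1 = rng.standard_normal((256, 2)) + 3.0  # toy target: shifted Gaussian
loss = cfm_loss(zero_model, x1, rng)
```

Minimizing this expectation over samples of the conditioning variable recovers the marginal vector field, which is the "magic" referred to above.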
comment in response to
post
FM is a technique to train continuous normalizing flows (CNFs), which progressively transform a simple base distribution into the target one.
2 benefits:
- no need to compute likelihoods or solve ODEs during training
- makes the problem better posed by defining a *unique sequence of densities* from base to target
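While no ODE is solved during training, sampling from a trained CNF does integrate the learned field from base to target. A minimal sketch with Euler steps, assuming a toy constant field u_t(x) = mu (whose flow simply shifts N(0, I) to N(mu, I)) in place of a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_flow(velocity, x0, n_steps=100):
    """Push base samples through dx/dt = velocity(x, t) with Euler steps.
    `velocity` stands in for the learned vector field u (in practice,
    a trained neural network)."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * velocity(x, t)
    return x

# toy field whose flow shifts N(0, I) to N(mu, I)
mu = np.array([2.0, -1.0])
field = lambda x, t: np.broadcast_to(mu, x.shape)
x0 = rng.standard_normal((5000, 2))   # base samples
x1 = sample_flow(field, x0)           # sample mean lands near mu
```

Only this sampling step needs the ODE; training touches the field pointwise via the FM loss.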
comment in response to
post
Thanks, it is for example the introductory example that I use in section 1.2 of my MSc lecture notes: mathurinm.github.io/assets/2022_...
comment in response to
post
I'm in!