hadivafaii.bsky.social
Postdoc at UC Berkeley, Redwood Center | 🧠🤖 | 🎹 | 🎾 | https://mysterioustune.com/
76 posts
687 followers
141 following
comment in response to
post
Thanks for the clarification!
"sufficiently well defined"
Too much overfitting to toy problems (i.e., physics)? I doubt this is even possible in the most important scientific challenge of our time: understanding complex systems
comment in response to
post
Thanks for sharing. I am also doing my best to spread the word (see top left).
From: openreview.net/forum?id=ekt...
comment in response to
post
What do you mean by opportunistic? Let me guess:
Science is not about truth-seeking. Rather, it's about optimizing some utility function, which may or may not contain elements of truth-seeking, depending on the context and which part of science we're discussing.
comment in response to
post
That's all, folks. I'll be back with Part 2 (and beyond) once they're up.
Thanks for reading this far. I hope it sparked your curiosity.
(here's the link again: mysterioustune.com/2025/01/13/w...)
P.S. If you enjoyed this thread, I'd love to hear your thoughts.
🧵[16/n]
🧠🤖 #AI
comment in response to
post
I end with a call to action:
We are each gifted a finite amount of time on this earth.
How will you spend yours? Chasing fleeting distractions? Or contributing to humanity's deepest, longest-running quest: minimizing our collective KL divergence?
Your choice.
🧵[15/n]
🧠🤖 #AI
comment in response to
post
In the season finale (Part X), I will go all-in on speculation: p_brain is the all-encompassing object, swallowing everything. Even physics.
In that sense, all of science and philosophy merge into a single pursuit:
➡️ Humanity's collective KL minimization.
🧵[14/n]
comment in response to
post
But hold on a second. We're not done yet.
It turns out, I need 10 more parts to finish the full story arc. Here's what I have in mind next:
🧵[13/n]
comment in response to
post
This "blog post" ended up being 20+ pages. I couldn't break it into smaller parts.
Every section here contributes to a single cohesive message:
✅ brains adapt = minimize KL
Curious about any of the details? Find the full PDF here: mysterioustune.com/2025/01/13/w...
🧵[12/n]
comment in response to
post
Here's my signature "Math/English Correspondence" table, summarizing the mathematical results, alongside their English translation:
🧵[11/n]
comment in response to
post
Recap so far:
✅ Brains: adapt to survive, survive to adapt
✅ adaptation = KL minimization
✅ most of ML = KL minimization
✅ KL asymmetry captures the flow of information
✅ KL minimization = flow of information from the world into your brain, updating your beliefs
🧵[10/n]
comment in response to
post
This Treasure Hunt example sheds light on the directional nature of KL:
✅ KL divergence must be asymmetric, because it captures the directed flow of information from the world to the brain (as when you read a book or perform an experiment)
🧵[9/n]
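For readers who want to poke at this asymmetry directly, here's a minimal numerical sketch (the distributions are made up for illustration; this code is not from the post):

```python
import numpy as np

def kl(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i), in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Made-up distributions over three outcomes.
world = [0.8, 0.1, 0.1]   # p: what actually happens
belief = [0.4, 0.3, 0.3]  # q: what the observer expects

forward = kl(world, belief)  # surprise of the believer watching the world
reverse = kl(belief, world)  # NOT the same quantity
```

Both values are nonnegative, but they differ (roughly 0.335 vs. 0.382 nats here), which is why the direction of the divergence carries meaning.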
comment in response to
post
In section 5, I explore the relative information interpretation of KL through an intuitive "Treasure Hunt" story.
I then connect this to philosophy of science, and experiments as "truth-revealing actions."
(another relevant @alexalemi.bsky.social piece: blog.alexalemi.com/kl.html)
🧵[8/n]
comment in response to
post
So, adaptation and KL minimization are intimately related. But what about learning?
✅ In section 4, I apply these foundational principles to "derive" (almost) all of machine learning from KL minimization.
(first pointed out by @alexalemi.bsky.social: blog.alexalemi.com/kl-is-all-yo...)
🧵[7/n]
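A toy illustration of that "ML = KL minimization" claim, under my own assumptions (a Bernoulli model; none of this code is from the post): maximum-likelihood fitting and KL minimization pick out the same parameter, because the two objectives differ only by the entropy of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(10_000) < 0.7   # samples from Bernoulli(0.7)
p_hat = data.mean()               # empirical distribution of the data

def nll(theta):
    """Average negative log-likelihood of the data under Bernoulli(theta)."""
    return -(p_hat * np.log(theta) + (1 - p_hat) * np.log(1 - theta))

def kl_bern(p, q):
    """KL divergence between two Bernoulli distributions."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

thetas = np.linspace(0.01, 0.99, 99)
best_by_nll = thetas[np.argmin(nll(thetas))]
best_by_kl = thetas[np.argmin(kl_bern(p_hat, thetas))]
# kl_bern(p_hat, theta) = nll(theta) - H(p_hat), a constant offset,
# so both objectives are minimized at theta ~= p_hat.
```

The grid search is just for transparency; the point is that subtracting a data-dependent constant (the entropy) cannot move the minimizer.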
comment in response to
post
Importantly, KL divergence naturally emerged from the mathematics of likelihood evaluation for an observer living in a world.
I didn't put it in by hand, or make crazy assumptions.
✅ There is something truly fundamental, unique, and privileged about KL divergence.
🧵[6/n]
comment in response to
post
Here's our first deep insight. The following are equivalent:
you adapt to the world
↕️
you accurately predict the world
↕️
fewer nasty surprises that could get you killed
↕️
KL ( world || your beliefs ) drops
🧵[5/n]
comment in response to
post
A few simple logical steps yield our main theoretical result 👇
This magical equation connects two seemingly distinct concepts:
1️⃣ P_obs: the subjective probability your brain assigns to its observations
2️⃣ KL divergence between the true world and your internal beliefs
🧵[4/n]
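The equation itself lives in the linked PDF; what it appears to reference is the standard decomposition E_p[log q] = -H(p) - KL(p || q), which connects those two concepts and can be checked numerically (distributions made up for illustration):

```python
import numpy as np

world = np.array([0.5, 0.25, 0.25])  # p: true outcome probabilities (made up)
belief = np.array([0.6, 0.2, 0.2])   # q: the brain's subjective probabilities

# Average log subjective probability assigned to what is actually observed:
avg_log_p_obs = np.sum(world * np.log(belief))

entropy = -np.sum(world * np.log(world))     # H(p): irreducible surprise
kl = np.sum(world * np.log(world / belief))  # KL(p || q): excess surprise

# Standard identity: E_p[log q] = -H(p) - KL(p || q).
# H(p) is fixed by the world, so raising the subjective probability of
# observations is exactly the same as lowering KL(world || beliefs).
```

Read this way, "accurately predicting the world" and "minimizing KL" are two descriptions of one quantity, which is the equivalence the thread is pointing at.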
comment in response to
post
This "blog post" ended up being 20+ pages. So I just posted the intro, conclusions, and a link to a PDF with full math derivations.
Here's the table of contents to give you a sense of what's there 👇
⏩ Let's continue by summarizing the most important results.
🧵[3/n]
comment in response to
post
This post started with a simple observation:
✅ Adaptation is so central to all biological intelligence that it may very well define it.
But can we mathematize this fuzzy English term?
🧵[2/n]
🧠🤖 #AI
comment in response to
post
➡️ The most important concept in modern science does not exist [in an objective sense].
In science, we're all studying subjective states of belief, given constraints and reasonable assumptions.
The real question is: how are those subjective probabilities implemented by networks of neurons?
🧠🤖
comment in response to
post
You framed this as Helmholtzian inference versus RL, but there is a line of work that casts RL as inference (see review by Sergey Levine: arxiv.org/abs/1805.00909).
So my question is: given this connection, can we interpret your results under the unifying framework of RL as inference?
comment in response to
post
Poster info:
📅 Thursday, Dec 12
⏲️ 4:30–7:30 PM PST
📍 East Exhibit Hall A-C, #3709
Unfortunately, I can't attend in person due to visa issues, but I'll join virtually via an iPad (special thanks to my awesome collaborator Dekel for agreeing to set this up!)
Hope to see some of you there!
🧠🤖
comment in response to
post
Links:
Paper: arxiv.org/abs/2405.14473
Code: github.com/hadivafaii/P...
Poster: drive.google.com/file/d/1ZyMJ...
X thread summary: x.com/hadivafaii/s...
🧠🤖
comment in response to
post
Finally, check out the paper and code:
📄 Paper: arxiv.org/abs/2405.14473
🖼️ Poster: drive.google.com/file/d/1ZyMJ...
💻 Code: github.com/hadivafaii/P...
comment in response to
post
Here is the poster information:
📅 Thursday, Dec 12
⏲️ 4:30–7:30 PM PST
📍 East Exhibit Hall A-C, #3709
Unfortunately, I won't be there due to visa issues, but I will join virtually through an iPad (special thanks to my awesome collaborator Dekel who agreed to set up my virtual attendance!).
comment in response to
post
This X thread summarizes the motivations behind our work and describes key findings:
x.com/hadivafaii/s...
comment in response to
post
Hi! I work on visual perception. Could I be added?
comment in response to
post
Could I be added as well? Here are two papers, pure theory, no data!
openreview.net/forum?id=ekt...
arxiv.org/abs/2410.19315
comment in response to
post
I'm currently working on the first long post. It's about "World Models, Adaptation, and Surprise."
Stay tuned!
Here's the link again: mysterioustune.com
comment in response to
post
On the About page, I explain the context behind the blog's name:
mysterioustune.com/about/
comment in response to
post
I'm currently working on the first post. It's going to be about "World Models, Adaptation, and Surprise."
Stay tuned!
Here's the link again: mysterioustune.com
comment in response to
post
The About page provides context about the blog's name:
mysterioustune.com/about/
comment in response to
post
Hi Raymond, I'm in the Bay Area and I'd be down to chat over coffee! When will you be here?
comment in response to
post
- before accurate measurements: cognition
- after accurate measurements: breathing, eye movement, etc.
comment in response to
post
But also... "The Oscillator Brain"!
Not far off actually haha
comment in response to
post
"The Involute Brain"
comment in response to
post
I'd love to be added. Thanks!
comment in response to
post
In your view, are there fundamental differences between stochastic optimal control and reinforcement learning?
This review paper says they can be unified: arxiv.org/abs/1912.03513
I'm asking because I'm interested in modeling active vision and looking for an appropriate mathematical framework.
comment in response to
post
@neuralreckoning.bsky.social could you add me as well? I don't do SNNs in the conventional sense, but I like spikes :)
Here are some relevant works:
openreview.net/forum?id=ekt...
arxiv.org/abs/2410.19315
comment in response to
post
Hey there. I'm new to spiking neural nets and look forward to learning more from this community!
comment in response to
post
AI for neuroscience 🙋‍♂️
comment in response to
post
Can you add me as well? Here's a recent relevant work:
openreview.net/forum?id=ekt...
comment in response to
post
Does this resonate with your perspective, @dickretired.bsky.social?
[5/5]