bellecguill.bsky.social
AI, Neuroscience and Music
52 posts 939 followers 230 following
Regular Contributor
Active Commenter
comment in response to post
And wait! The drama continues in the comment section: youtube.com/watch?v=70vY... She responded! Or maybe she did not, because it was "deleted". Or maybe it was not deleted, because it was copied and some people still see it. Or maybe it's all a lie... 😱 OMG, this is better than Netflix
comment in response to post
Very timely to think about this though. My conclusion for today: never give up scientific integrity! But don't be an integrity-police maniac. The most useful reaction/response/review is probably calm and targeted.
comment in response to post
- Sabine going wild on academic integrity: youtu.be/shFUDPqVmTg
- Why she also lacks scientific integrity: youtu.be/70vYj1KPyT4
comment in response to post
No, I did not try.
comment in response to post
Which version? Here is the output of Claude Sonnet 3.5: claude.site/artifacts/d9... I did not play much with the prompt.
comment in response to post
Without saying it, what we compute mathematically is either:
- p(A,B) with passive recording,
- p(A | do(B=0)) with opto-inactivation of area B.
Only the latter lets us see the difference between the recurrent versus feedforward hypotheses H1 and H2. More in: www.biorxiv.org/content/10.1...
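To illustrate the distinction, here is a toy simulation (my own sketch, not from the paper): when a common input u drives both areas, conditioning on B = 0 and intervening with do(B = 0) give different predictions for area A.

```python
# Toy sketch: conditioning vs intervening in a linear-Gaussian model
# where a common input u drives areas A and B, and B also drives A.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(1.0, 1.0, n)                  # common input to both areas
b = u + rng.normal(0.0, 1.0, n)              # area B, passively recorded
a = 0.5 * b + u + rng.normal(0.0, 1.0, n)    # area A receives B and u

# p(A | B ~ 0): observing B near zero suggests u was low too
print(a[np.abs(b) < 0.05].mean())            # ~ 0.5

# p(A | do(B = 0)): opto forces B to 0, u is untouched
a_do = 0.5 * 0.0 + u + rng.normal(0.0, 1.0, n)
print(a_do.mean())                           # ~ 1.0
```

The two answers differ (≈0.5 vs ≈1.0) because conditioning carries information about the common input, while the intervention only cuts the B → A pathway.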
comment in response to post
Strongly related to the great blog post by @kordinglab.bsky.social on causality
comment in response to post
So I fully agree with the blog of @kordinglab.bsky.social, and it's just a matter of time before important papers fruitfully analyze optogenetics data with causality theories. We tried to contribute a bit: www.biorxiv.org/content/10.1...
comment in response to post
Still not making "causal claims"... But optogenetic stimulation changes the picture significantly. In math, the do-operator is used to model causality by setting some variables to a value. Optogenetic inactivation sets the activity to 0 in the middle of a network trajectory.
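In symbols (my notation, not the paper's): if the network evolves as x_{t+1} = f(x_t) and S is the inactivated population, the opto experiment implements

```latex
x_{t+1} = f(x_t), \qquad
\operatorname{do}\!\left(x_{t,i} = 0 \;\; \text{for } i \in S,\; t_{\mathrm{on}} \le t < t_{\mathrm{off}}\right)
```

i.e., the targeted units are clamped to zero during the stimulation window while the rest of the trajectory evolves freely.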
comment in response to post
I don't think we truly know what the investment was though. For sure it shows the grey-market regulation for GPUs in China is not working. + @timkellogg.me wrote they use Huawei chips for inference (dunno where the info comes from). That's big if DeepSeek becomes the best and cheapest LLM API
comment in response to post
It's hard, but I think it's what many modelers are trying to do. But the conclusion is sometimes subjective or speculative, until evidence accumulates for it. I think "perturbation testing" is emerging as a good way to validate mechanisms inside data-constrained models.
comment in response to post
It also inspired me; I think we go further. Can the "plane model" predict a motor disruption? A wind perturbation? The wooden model is useless, but the paper plane needs an upgrade to make a prediction. An ML model needs a mechanistic upgrade (or more data, but that's another topic) to predict perturbations
comment in response to post
Sorry for the typo, he is called Vahid Esmaeili. Thank you for the beautiful dataset (electrode array recordings + opto-inactivation). www.cell.com/neuron/fullt...
comment in response to post
This is an early preprint; all feedback, questions, or reactions are welcome. Feel free to ping me if you are interested in chatting about this 🧠💙🧪 #Neuroskyence
comment in response to post
Congrats to @christossourmpis.bsky.social, who finishes his PhD with this. He worked hard and I think he is brilliant 🏆 Many thanks to Wulfram Gerstner and Carl Petersen for the guidance and the support. Thanks to Fahid Esmailli for collecting the in-vivo data. www.biorxiv.org/content/10.1... 8/8
comment in response to post
Opinion: Finally, mechanism modeling is not only about subjective "biological plausibility". Here, "perturbation tests" provide a concrete evaluation to aim for. In 10 years of research, this is the first time I have seen bio-mechanism modeling beat raw deep learning on hard prediction metrics. 7/8
comment in response to post
Again, call me crazy! We argue that a perturbation-robust RNN enables measurements of brain gradients. This is because, mathematically, the effect of μ-perturbations is one Taylor expansion away from the RNN gradients. So -- if the RNN is robust -- the gradients of the RNN approximate the gradients in the recorded circuit. Cool! 6/8
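To spell out the Taylor argument (a sketch in my own notation, with y a behavioral readout of the network state r, and δ a small μ-perturbation applied to unit i):

```latex
y(r + \delta e_i) \approx y(r) + \delta \,\frac{\partial y}{\partial r_i}
\quad\Longrightarrow\quad
\frac{y(r + \delta e_i) - y(r)}{\delta} \approx \frac{\partial y}{\partial r_i}
```

So the measured effect of a small perturbation of unit i estimates the gradient ∂y/∂r_i; if the fitted RNN is perturbation-robust, its gradients should match those of the recorded circuit.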
comment in response to post
To speculate on why perturbation-robust RNNs will become important: we simulate a read-write opto-experiment setup where a robust RNN is used to target optimal μ-perturbations and change simulated mouse behavior in real time. (We also think it's a bit crazy... but it works in simulation) 5/8
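A hypothetical sketch of the closed loop (not the paper's code; `rnn_step` and `readout` are assumed callables standing in for the fitted model): use the RNN's gradients to pick which units to stimulate so the behavioral readout moves in the desired direction.

```python
# Closed-loop perturbation targeting with a fitted RNN (sketch).
import torch

def target_mu_perturbation(rnn_step, readout, state, n_targets=5, delta=0.1):
    """Return a perturbed state that nudges `readout` upward.

    rnn_step: callable mapping state -> next state (the fitted RNN)
    readout:  callable mapping state -> scalar behavior prediction
    """
    state = state.detach().requires_grad_(True)
    behavior = readout(rnn_step(state))        # predicted behavior (scalar)
    behavior.backward()                        # d(behavior) / d(state)
    grads = state.grad
    top = torch.topk(grads.abs(), n_targets).indices   # most influential units
    perturbed = state.detach().clone()
    perturbed[top] += delta * torch.sign(grads[top])   # nudge along the gradient
    return perturbed
```

The same gradients could also be flipped in sign to suppress a behavior; the point is that a robust model makes the targeting trustworthy.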
comment in response to post
We tested this in-vivo with multi-area recordings in mice covering 6 areas from sensory to motor cortices. Our RNNs also predict jaw movements recorded with a camera. The results are consistent with the artificial data. Dale's law, local inhibition (and spikes) make the model more robust. 4/8
comment in response to post
We make RNN variants with added bio-features (e.g. Dale's law). Empirically, the features that improve robustness to perturbations the most are:
- Dale's law: E/I weights are +/-
- Local inhibition: I units do not project to other areas
Other features improve less:
- Replacing σ with spikes
- Sparsity prior
3/8
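A rough sketch of the two main constraints (my own, not the paper's implementation; the area layout and the 20% inhibitory fraction are assumptions), with W indexed as (post, pre):

```python
# Dale's law + local inhibition as weight constraints on a rate RNN.
import numpy as np

n, n_areas = 100, 2
area = np.repeat(np.arange(n_areas), n // n_areas)   # area label per unit
is_inh = np.random.rand(n) < 0.2                     # ~20% inhibitory units

W = np.abs(np.random.randn(n, n) * 0.1)              # start non-negative
W[:, is_inh] *= -1            # Dale's law: inhibitory columns are negative

# Local inhibition: zero inhibitory connections that cross area boundaries
cross_area = area[:, None] != area[None, :]          # (post, pre) in different areas
W[cross_area & is_inh[None, :]] = 0.0
```

During training these constraints would be re-applied after each gradient step (e.g. by clipping signs), so the variant stays within the biological weight structure.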
comment in response to post
We train RNNs to fit spike-train recordings (in-vivo in mice or artificial data). RNN units are mapped 1-to-1 onto brain cells, so we can simulate opto-activation of a cell type in one area. Vanilla σRNNs predict very well before perturbation, but their response after perturbation is very wrong. 2/8
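A hypothetical sketch of how the 1-to-1 unit/cell mapping lets us simulate opto perturbations (assuming a simple rate RNN with tanh nonlinearity; the paper's σRNN details differ): run the fitted model, but clamp the mapped units during the stimulation window.

```python
# Simulated opto perturbation: clamp mapped units mid-trajectory.
import numpy as np

def simulate_with_opto(W, x0, T, clamped_units, t_on, t_off, value=0.0):
    """Rate-RNN rollout; `clamped_units` are forced to `value`
    (0 = inactivation) between steps t_on and t_off."""
    x = x0.copy()
    trace = []
    for t in range(T):
        x = np.tanh(W @ x)                 # one recurrent update
        if t_on <= t < t_off:
            x[clamped_units] = value       # the opto "do" operation
        trace.append(x.copy())
    return np.array(trace)
```

Comparing this rollout to the unperturbed one is exactly the robustness test: a good mechanistic model should still match the recorded responses after the clamp.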
comment in response to post
Thank you 🧠⚡
comment in response to post
Cool stuff! Can I be added?