1/ Okay, one thing that has been revealed to me from the replies to this is that many people don't know (or refuse to recognize) the following fact:
The units in ANNs are actually not a terrible approximation of how real neurons work!
A tiny 🧵.
🧠📈 #NeuroAI #MLSky
Reposted from
Blake Richards
Why does anyone have any issue with this?
I've seen people suggesting it's problematic, that neuroscientists won't like it, and so on.
But, I literally don't see why this is problematic...
https://www.nature.com/articles/s42003-021-02437-y
And that's why it's important to clarify that it's an *approximation*, one adopted because it makes more of the math tractable.
(Can we approximate ANNs by brains?).
https://neurotext.library.stonybrook.edu/C3/C3_3/C3_3.html#:~:text=The%20simplest%20equivalent%20circuit%20for,single%20capacitor%20(Cm).
And the derivative of the membrane potential is a linear function of the current at *every* time-step.
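To make that concrete, here's a minimal sketch of the standard RC membrane equation from the linked textbook chapter (the specific parameter values are toy assumptions of mine, not from the thread), showing that at any instant the derivative of the membrane potential is a linear function of the injected current:

```python
# Leaky RC membrane: C * dV/dt = -(V - E_L)/R + I
# Toy parameters (assumed for illustration): nF, MOhm, mV
C, R, E_L = 1.0, 10.0, -70.0

def dVdt(V, I):
    """Derivative of the membrane potential; linear in the current I."""
    return (-(V - E_L) / R + I) / C

# Linearity check at a fixed V: the response to I1 + I2 equals the
# sum of the responses to I1 and I2 (minus the baseline leak term).
V = -65.0
lhs = dVdt(V, 3.0)
rhs = dVdt(V, 1.0) + dVdt(V, 2.0) - dVdt(V, 0.0)
assert abs(lhs - rhs) < 1e-12
```

So at every time-step, the voltage update is a weighted (linear) summation of inputs, which is exactly what an ANN unit does before its nonlinearity.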
I think people confuse the specific way ANNs tend to be formulated in ML applications with ANNs as a general modeling approach?
And this review includes leaky units (Eqn 26), as well as another way to E/I (Eqn 35): https://pmc.ncbi.nlm.nih.gov/articles/PMC11576090/
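For anyone who wants to play with it: a minimal sketch of a leaky rate unit in the spirit of that review's leaky-unit equation (the time constant, weights, and ReLU nonlinearity here are my assumptions for illustration, not taken from the review). At steady state it reduces to an ordinary ANN unit:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5)) / np.sqrt(5)  # toy weight matrix
x = rng.normal(size=5)                    # fixed input

tau, dt = 20.0, 1.0                       # ms (assumed)
f = lambda u: np.maximum(u, 0.0)          # rate nonlinearity (ReLU, for illustration)

# Leaky dynamics: tau * dr/dt = -r + f(W x)
r = np.zeros(3)
for _ in range(500):
    r = r + (dt / tau) * (-r + f(W @ x))

# At the fixed point, r = f(W x) -- i.e. a standard feedforward ANN unit.
assert np.allclose(r, f(W @ x), atol=1e-4)
```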
but do you know of ANN models that don't assume the neurons are all identical, but add a simple diversity/randomness factor?
I was thinking of things like this - https://symposium.cshlp.org/content/83/45.full.pdf?utm_source=chatgpt.com
Excuse me while I engage in a bit of self-promotion 😅:
https://www.biorxiv.org/content/10.1101/2024.08.07.606541v1.abstract
But, I'm always confused by people who suggest rate-based units are "nothing like" real neurons, because I think that's an obviously false statement.
https://www.biorxiv.org/content/10.1101/2024.12.17.628883v1
We explored how different neuronal properties across neuron types, cortical layers, and species translate into different functional complexities, using ANNs as I/O complexity measurements.
More here: https://medium.com/@virati/failure-modes-and-models-3d84020982dd
Beniaguev, D., Segev, I., & London, M. (2021). Single cortical neurons as deep artificial neural networks. Neuron, 109(17), 2727-2739.
🕯
🕯 🕯
🕯 🕯
🕯 @ilennaj.bsky.social 🕯
🕯 🕯
🕯 🕯
🕯
Yes, the brain is a giant neural network, but *not* at all in the way NeurIPS thinks of neural networks. The brain is not a giant ANN of generic fully connected layers.
how well the McCulloch-Pitts or DL abstraction has survived later models, as you argue. I would think, however, that its relevance for different neuroscientists depends on what they want to explain.
https://compneuro.neuromatch.io/tutorials/W2D3_BiologicalNeuronModels/student/W2D3_Tutorial1.html
https://direct.mit.edu/neco/article-abstract/29/12/3260/8316/Capturing-the-Dynamical-Repertoire-of-Single
https://www.cell.com/AJHG/fulltext/S0896-6273(03)00149-1
We should do both more detailed models and more abstract models. Each has their use!
https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2020.00033/full
Gating and sign inversion are powerful effects, surely they matter...
https://bsky.app/profile/tyrellturing.bsky.social/post/3ldh4plptb42a
For glutamatergic synapses, it probably doesn't matter much, because the reversal is very far from where the neuron usually is.
For GABAergic synapses, though, yes, it surely matters!
But, as the data I presented in the thread shows, these complexities are more about the precise behaviour of the neuron.
The broad, rate-based, IO function is reasonably well described using a linear integration step.
This is a very weak defense of the analogy. You're being convinced by the marketing term 'neural net'.
ANN nodes are also not very good approximations of real neurons.
2. It was never “marketing”. ANNs were originally invented as models of the brain, so they were named after the very thing they were trying to model.
PCA is also not a functional model for V1.