seanescola.bsky.social
Came for the neuro; stayed for the AI
64 posts
1,422 followers
409 following
Regular Contributor
Active Commenter
comment in response to
post
Very cool work, David! Congrats!
comment in response to
post
Amazing!
comment in response to
post
Congrats on the move!
comment in response to
post
I don't think traditional systems neuroscience fits my definition, as it has rarely concerned itself with general principles of intelligence that apply to non-biological systems.
comment in response to
post
The article describes a “privately funded and self-sustaining entity” and specifically eschews reliance on public funds
comment in response to
post
I believe the proposal is not to fund one idea, but rather build an incubator for many ideas that, after the initial investment, pays for itself sustainably in perpetuity. It also addresses the gap in funding dedicated to intelligence science per se. It’s a problem that #NeuroAI grants go to the NIH
comment in response to
post
My first thought!
comment in response to
post
My takeaway: if the scale era is over, maybe we get to be smart again!
comment in response to
post
I’m no expert in the history of cog sci, but from listening to @tyrellturing.bsky.social, it sounds like there was an active disregard for neuro in the early days and an embracing of it later, but at the expense of non-brain intelligence. So brain+machine bicuriosity hasn’t yet been realized
comment in response to
post
I disagree. Computational and systems neuroscience has historically been interested in how brains are intelligent, not in intelligence generally independent of substrate (although perhaps the Caltech program differed from the field as a whole)
comment in response to
post
For completeness, I’ve typically defined NeuroAI as the discipline that uses ideas from neuroscience (architectural, representational, behavioral, dynamic, learning, etc.) to build AI that is “better” (more accurate, able to generalize, robust, data efficient, energy efficient, etc.)
comment in response to
post
🙋
comment in response to
post
I just checked out the supp figs. Maybe I’m reading it wrong, but it looks like the images were corrupted without reference to the model, right? As opposed to doing gradient descent in pixel space to find the minimal corruption to fool the model, which is what I believe Patrick did to his chihuahua.
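For concreteness, a rough sketch of the model-aware version I mean, assuming a differentiable PyTorch classifier; `model`, `image`, and `label` are placeholders, and the penalty-based loss is a generic Carlini-Wagner-style relaxation, not necessarily what Patrick actually used:

```python
import torch
import torch.nn.functional as F

def minimal_adversarial(model, image, label, c=1.0, steps=200, lr=0.01):
    """Gradient descent in pixel space: find a small perturbation `delta`
    that fools `model`, trading perturbation size against the
    classification loss on the true label."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(image + delta)
        # keep the corruption small while pushing the model off the true label
        loss = delta.pow(2).sum() - c * F.cross_entropy(logits, label)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta).detach()
```

The point is that `delta` is optimized against the model’s gradients, whereas generic image corruption never consults the model at all.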
comment in response to
post
I mean finding the minimum perturbation to an image needed to defeat the model. But instead of defeating an object recognition model, you find the perturbation to defeat keypoint extraction. I’m guessing this would be a pretty straightforward rotation project with potentially interesting results
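Something like this, assuming a keypoint model that maps an image tensor to an (N, 2) tensor of coordinates (the interface here is hypothetical):

```python
import torch

def keypoint_adversarial(kp_model, image, steps=200, lr=0.01, c=1.0):
    """Find a small perturbation that displaces predicted keypoints,
    rather than flipping a class label."""
    with torch.no_grad():
        clean_kps = kp_model(image)  # keypoints on the clean image
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        kps = kp_model(image + delta)
        # penalize perturbation size; reward keypoint displacement
        loss = c * delta.pow(2).sum() - (kps - clean_kps).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta).detach()
```

If keypoint adversarials turn out to need visibly larger perturbations than classification adversarials, that’s exactly the perceptual-salience comparison I’m curious about.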
comment in response to
post
Cool! Have you ever tried adversarial examples for keypoint extraction? It would be interesting to know if they are more perceptually salient to humans than adversarials for object recognition. If so, it might suggest that biological vision works more like your models: keypoints then objects
comment in response to
post
One hopes the dot product of those two vectors is positive
comment in response to
post
We sort of did this as a field last year, when 240 teams of 20 neuroscientists each applied for 10-year @simonsfoundation.org grants. It would be interesting for Simons to publish what they learned about what people want to do. Is this planned? @lyssa12.bsky.social @kelseycmartin.bsky.social
comment in response to
post
We have a Roborock. My son calls it Walter. It’s his best friend. I have failed as a parent but the floors are clean
comment in response to
post
Thanks! And thanks for starting this convo!
comment in response to
post
Here, the valence of mood allows for initial learning such that later learning can be contextualized by mood
comment in response to
post
Yeah, like this. I think of it as part of the larger idea that mammals evolved the ability to simulate the world so that they can learn from simulations (as described in Max Bennett’s wonderful book A Brief History of Intelligence).
comment in response to
post
This might be a bootstrapped phenomenon, btw. Initially the good v bad binary was necessary for RL in simple bilaterians. It’s possible that the subjective “goodness” or “badness” of mood in the era of dynamic objectives is vestigial. Hot take: are mood d/o’s the mental equivalents of appendicitis?!?
comment in response to
post
Yeah, agreed. The multidimensionality of mood as mentioned in other branches of this thread means that optimizing mood can’t be the sole goal. More useful to think of mood as driving dynamics in the objective function imo
comment in response to
post
I don’t have a sense of why moods FEEL good and bad, though, and I’m not sure this is a question that can be studied using the tools of science
comment in response to
post
My intuition is simply that mood states allow for context-dependent decision-making and learning in the same way that other states (hunger, tiredness, etc.) do. That is, moods are part of how we have dynamic objective functions (one key way in which biological and artificial intelligences differ)
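A toy sketch of what I mean by a dynamic objective (my own construction, purely illustrative): a bandit learner whose subjective reward is rescaled by a slowly varying mood variable, so the objective it optimizes drifts with its recent history:

```python
import numpy as np

rng = np.random.default_rng(0)
arm_means = np.array([1.0, 0.5, 0.0])  # toy three-armed bandit
q = np.zeros(3)                        # learned value per arm
mood = 0.0                             # slow internal state

for t in range(1000):
    arm = rng.integers(3) if rng.random() < 0.1 else int(q.argmax())
    reward = rng.normal(arm_means[arm])
    mood = 0.95 * mood + 0.05 * reward           # mood tracks recent outcomes
    subjective = reward * (1.0 + np.tanh(mood))  # mood rescales the objective
    q[arm] += 0.1 * (subjective - q[arm])        # learn on mood-modulated value
```

The specific functional form doesn’t matter; the point is that the learner’s effective objective is state-dependent rather than fixed.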
comment in response to
post
🙋
comment in response to
post
Agree with this. It’s useful to describe a field when one is able to articulate a goal or set of goals that sets it apart from other fields. I would claim that #NeuroAI hits this mark. But clearly the field draws from many communities
comment in response to
post
And I think the labels “computational neuroscience” or “theoretical neuroscience” make it harder for us to get that seat
comment in response to
post
…we have something to say! I assert with great confidence that AI will advance massively faster when informed by natural intelligence than otherwise. The semantic argument is secondary to the goal of building an intelligence science ecosystem where neuro has a seat at the table
comment in response to
post
If the NeuroAI community doesn’t explicitly declare itself as a discipline distinct from understanding the brain, there is a real risk that we will be ignored by folks outside of neuro. And I’m not worried about being ignored because it will hurt my feelings (though it will 😂) but rather because…
comment in response to
post
At NAISys, for example, I would say the balance of comp/sys neuro folks to others was about 3:1, so you’re right. But the hope is that we can generate interest from other fields.
comment in response to
post
To @nicolecrust.bsky.social’s point that LLMs are miles away from having the EQ needed to compete with human therapists, there are interesting questions of what kinds of paired data could endow models with better EQ, including neural data, facial expression, and non-semantic speech features
comment in response to
post
I should add that I say this as both a psychodynamically trained psychiatrist and a NeuroAI researcher
comment in response to
post
A huge part of a psychodynamic therapist’s training is to recognize what’s coming from the therapist’s own life (e.g., is their feeling of anxiety a projection from the patient or a consequence of their conflicts at home/work?). AI therapists won’t need to figure this out
comment in response to
post
The upside of an AI therapist is that the countertransference (the therapist’s feelings about the patient) – a key tool for a therapist in understanding a patient – won’t be clouded by what’s going on in the therapist’s own life.
comment in response to
post
I see the bigger risk being cultural rather than procedural. I.e., patients may reject an AI therapist when they know the therapist is AI. If the EQ gap is closed, it will be interesting to see if blinded studies reveal patient preference
comment in response to
post
To the larger point: it’s an interesting question whether the emotional intelligence gaps of current models can be closed, but if they are, I’m of the opinion that AI therapists could potentially outperform human therapists in certain (perhaps eventually all) settings
comment in response to
post
A bunch of work in the psychoanalytic community during/after Covid has proposed that the Zoom window works a bit like the couch: it creates some distance from the therapist that facilitates patient free-association
comment in response to
post
I agree that we don’t want science to be insular! Curious why you see this as a risk. Another outcome would be that AI peeps become neuro-curious. That would be a win!
comment in response to
post
Thus, it’s useful to elevate #NeuroAI to the status of a field for the pragmatic reason of encouraging major stakeholders to help build this critical infrastructure. Spoiler alert: @davidamarkowitz.bsky.social will be having a lot more to say about this soon