taylorwwebb.bsky.social
Studying cognition in humans and machines https://scholar.google.com/citations?user=WCmrJoQAAAAJ&hl=en
53 posts 1,156 followers 389 following

PhD position in cognitive computational neuroscience! Join us, & investigate how we can endow domain-specific models of vision (e.g. DNNs) with domain-general processes such as metacognition or working memory. All details => www.kuleuven.be/personeel/jo... #PsychSciSky #Neuroscience #Neuroskyence

Why Tononi et al's defense of IIT fails to convince me. medium.com/@kording/86f...

I am happy to be a signatory to this updated critique of the integrated information theory (IIT) of consciousness. Despite much media attention, I agree that its 'core claims are untestable even in principle' and it is therefore unscientific. www.nature.com/articles/s41...

New version of "the letter" in Nature Neuroscience. Like many others in the field, I signed because I believe that IIT threatens to delegitimize the scientific study of consciousness: www.nature.com/articles/s41....

LLMs have shown impressive performance in some reasoning tasks, but what internal mechanisms do they use to solve these tasks? In a new preprint, we find evidence that abstract reasoning in LLMs depends on an emergent form of symbol processing arxiv.org/abs/2502.20332 (1/N)

I’m very excited to finally see this one in print! Led by the incomparable @brissend.bsky.social www.nature.com/articles/s41... We find that cognitive processes (e.g. attention, working memory) undergo error-based adaptation in a manner reminiscent of sensorimotor adaptation. Read on! (1/n)

1/13 New Paper!! We try to understand why some LMs self-improve their reasoning while others hit a wall. The key? Cognitive behaviors! Read our paper on how the right cognitive behaviors can make all the difference in a model's ability to improve with RL! 🧵

preprint updated - www.biorxiv.org/content/10.1... Each of us perceives the world differently. What may underlie such individual differences in perception? Here, we characterize the lateral prefrontal cortex's role in vision using computational models ... 1/ 🧠📈 🧠💻

vision+language people: Does anyone have a good sense of why most of the recent SOTA VLMs now use a simple MLP as the mapping network between vision and LLM embeddings? Why does this work better? Is learning more efficient? Slowly over time people dropped the more elaborate Q-Former/Perceiver architectures.
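
For readers less familiar with the two designs being contrasted here, a rough sketch of the difference: an MLP projector maps each vision token into the LLM's embedding space one-to-one, while a Q-Former/Perceiver-style module compresses the vision tokens through a small set of learned queries. This is only an illustrative sketch in PyTorch; the dimensions, module names, and attention setup are assumptions, not any particular model's implementation.

```python
# Illustrative sketch of the two mapping styles discussed above (not from any
# specific model; dimensions and names are placeholders).
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Project each vision token independently into the LLM embedding space."""
    def __init__(self, vis_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vis_tokens):       # (batch, n_patches, vis_dim)
        return self.proj(vis_tokens)     # (batch, n_patches, llm_dim)

class QueryResampler(nn.Module):
    """Q-Former/Perceiver-style: learned queries cross-attend to the vision
    tokens, compressing them into a small, fixed number of summary tokens."""
    def __init__(self, vis_dim=1024, llm_dim=4096, n_queries=32, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(0.02 * torch.randn(n_queries, vis_dim))
        self.attn = nn.MultiheadAttention(vis_dim, n_heads, batch_first=True)
        self.out = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_tokens):       # (batch, n_patches, vis_dim)
        q = self.queries.unsqueeze(0).expand(vis_tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, vis_tokens, vis_tokens)
        return self.out(pooled)          # (batch, n_queries, llm_dim)
```

One plausible reading of the trend, offered only as speculation: the MLP keeps every vision token and lets the LLM's own attention decide what matters, which is simpler to optimize, whereas the resampler adds an extra cross-attention bottleneck that must be learned from scratch.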

Key-value memory is an important concept in modern machine learning (e.g., transformers). Ila Fiete, Kazuki Irie, and I have written a paper showing how key-value memory provides a way of thinking about memory organization in the brain: arxiv.org/abs/2501.02950
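
For anyone new to the idea, the core operation is a differentiable key-value lookup: a query is compared against stored keys, and the values are read out as a softmax-weighted average, which is exactly the retrieval step in transformer attention. The snippet below is a generic toy illustration of that operation, not the model from the paper.

```python
# Toy illustration of key-value retrieval (the core operation in transformer
# attention). Generic sketch for intuition only, not the paper's model.
import numpy as np

def kv_retrieve(query, keys, values, temperature=1.0):
    """query: (d,), keys: (n, d), values: (n, d_v) -> retrieved value (d_v,)."""
    scores = keys @ query / temperature        # match the query against each stored key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over stored items
    return weights @ values                    # soft readout of the stored values

# Store three items, then cue retrieval with a noisy version of the second key.
rng = np.random.default_rng(0)
keys, values = rng.standard_normal((3, 8)), rng.standard_normal((3, 4))
noisy_cue = keys[1] + 0.1 * rng.standard_normal(8)
print(kv_retrieve(noisy_cue, keys, values))    # approximately values[1]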

Very nice analysis of the important role that visual perception plays in ARC problems. The ability of LLMs to solve these problems is dramatically affected by the size of the problem grid, even when the underlying problem is identical. anokas.substack.com/p/llms-strug...

These are some truly incredible results. It is of course ridiculous to try to pretend that this model is really doing program synthesis (and thus that previous claims about LLMs being an ‘off-ramp to intelligence’ are vindicated). A neural network has now matched average human performance on ARC.

🚨 New Paper! Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖 Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks! w/ @gretatuckute.bsky.social, @abosselut.bsky.social, @mschrimpf.bsky.social 🧵👇

1/ Okay, one thing that has been revealed to me from the replies to this is that many people don't know (or refuse to recognize) the following fact: The units in ANNs are actually not a terrible approximation of how real neurons work! A tiny 🧵. 🧠📈 #NeuroAI #MLSky
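
To make the claim concrete: the textbook correspondence is that an ANN unit computes a weighted sum of its inputs passed through a nonlinearity, which is also the standard rate-model simplification of a neuron (net synaptic drive mapped to a firing rate). The sketch below is purely illustrative and deliberately ignores spiking, dendrites, and dynamics.

```python
# Illustrative only: the point-neuron / rate-model correspondence behind the claim.
import numpy as np

def ann_unit(x, w, b=0.0):
    """Standard ANN unit: rectified weighted sum of inputs (ReLU)."""
    return np.maximum(0.0, w @ x + b)

def rate_neuron(x, w, b=0.0, gain=1.0):
    """Simple rate model: firing rate as a rectified, saturating function of
    net synaptic drive. Same skeleton, different output nonlinearity."""
    return gain * np.tanh(np.maximum(0.0, w @ x + b))

x = np.array([0.2, -0.5, 1.0])     # presynaptic activity
w = np.array([0.8, 0.3, -0.4])     # synaptic weights
print(ann_unit(x, w, b=0.1), rate_neuron(x, w, b=0.1))
```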

In this new preprint @smfleming.bsky.social and I present a new theory of the functions and evolution of conscious vision. This is a big project: osf.io/preprints/ps.... We'd love to get your comments!

How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this: Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢 🧵⬇️

Excited to announce that I'll be starting a lab at the University of Montreal (psychology) and Mila (Montreal Institute for Learning Algorithms) starting summer 2025. More info to come soon, but I'll be recruiting at the Masters and PhD levels. Please share / get in touch if you're interested!

Fascinating paper from Paul Smolensky et al illustrating how transformers can implement a form of compositional symbol processing, and arguing that an emergent form of this may account for in-context learning in LLMs: arxiv.org/abs/2410.17498

Very excited to share this work in which we use classic cognitive tasks to understand the limitations of vision language models. It turns out that many of the failures of VLMs can be explained as resulting from the classic 'binding problem' in cognitive science.

New preprint where @mmrobinson93.bsky.social and I jump into the literature on meta-cognition (hopefully in a useful way!): osf.io/preprints/os... We show that a simple memory model (TCC) can be straightforwardly adapted to make predictions about confidence #neuroscience #psychscisky

1/ Here's a critical problem that the #neuroai field is going to have to contend with: Increasingly, it looks like neural networks converge on the same representational structures - regardless of their specific losses and architectures - as long as they're big and trained on real world data. 🧠📈 🧪
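
For readers who want the concrete version of "converge on the same representational structures": claims like this are typically quantified by comparing two models' responses to the same stimuli with a representational similarity index. Below is a minimal sketch using linear CKA on synthetic activation matrices; the data and dimensions are placeholders, not results from any actual networks.

```python
# Minimal sketch of how representational convergence is often quantified:
# linear CKA between two activation matrices for the same set of stimuli.
# Synthetic data stands in for (n_stimuli, n_features) activations of two models.
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between X (n, d1) and Y (n, d2)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, 'fro') ** 2
    return num / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))

rng = np.random.default_rng(0)
latents = rng.standard_normal((100, 10))             # shared underlying structure
X = latents @ rng.standard_normal((10, 64))          # "model 1" activations
Y = latents @ rng.standard_normal((10, 32))          # "model 2" activations
print(linear_cka(X, Y))   # high, since both are linear readouts of the same latents
```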

Looking forward to digging into this!

i plan to move to Korea later this year, & will soon hire at all levels (students, postdocs, staff scientists, junior PIs) - docs.google.com/document/d/1... my lab in Japan will run till at least 2025. my current job has been nothing but a dream job, but hopefully the doc above explains the move. #neuroscience