Predictive coding has been one of the rare theories in neuroscience to make bold, testable predictions at the circuit level, and it's been under scrutiny for years.
It's exciting to see recent experiments pushing it to its limits, hopefully leading to new directions.
Reposted from
Blake Richards
This paper may be very important:
www.biorxiv.org/content/10.1...
tl;dr: if you repeatedly give an animal a stimulus sequence XXXY, then throw in the occasional XXXX, there are large responses to the Y in XXXY, but not to the final X in XXXX, even though that's statistically "unexpected".
π§ π π§ͺ
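The sense in which the final X is "statistically unexpected" can be made concrete with a toy frequency count (the 90/10 ratio and token strings below are illustrative assumptions, not numbers from the paper). Under the sequence statistics, Y is the likely fourth token after XXX, so a predictive-coding-style surprise signal should be large for the rare final X, the opposite of the reported neural responses:

```python
import math
from collections import Counter

# Hypothetical trial stream mirroring the paradigm: the trained
# sequence XXXY is frequent, the oddball XXXX is rare.
trials = ["XXXY"] * 90 + ["XXXX"] * 10

# Estimate P(4th token | first three tokens == "XXX") from the stream.
counts = Counter(t[3] for t in trials if t[:3] == "XXX")
total = sum(counts.values())
probs = {tok: n / total for tok, n in counts.items()}

# Surprise (negative log probability, in bits) of each possible 4th token.
surprise = {tok: -math.log2(p) for tok, p in probs.items()}

print(probs)     # the rare final X has low probability
print(surprise)  # and therefore high surprise relative to Y
```

On this toy model the oddball X carries roughly 3.3 bits of surprise versus about 0.15 bits for the expected Y, which is why the absence of a large response to it is the interesting result.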
Comments
Although complexity emerges with depth, I expected global predictions to influence lower areas via feedback, allowing lower-level representations to be contextualized by higher-level abstractions.
I don't think it's possible to get much more complexity than that in such shallow layers.