danfeuerriegel.bsky.social
ARC DECRA Fellow. Head of the Prediction and Decision-Making Lab at the University of Melbourne, Australia. Decision-making, predictive brains, neural adaptation, computational neuroscience, EEG, machine learning. He/him
55 posts
3,249 followers
576 following
comment in response to
post
These studies were motivated by the realisation that, in the visual system, we don't actually have solid evidence for expectation-related effects such as expectation suppression.
Reviewed here: doi.org/10.1016/j.ne...
comment in response to
post
This builds on (and replicates) our prior study that presented face stimuli and also did not identify predictive cueing effects.
doi.org/10.1016/j.ne...
Our findings are relevant to the evidence base we use to build and develop predictive processing models.
comment in response to
post
Wonderful to see this published!! Huge congrats and hope things are well in the States
comment in response to
post
(Not that it exists, but that it could be localised)
comment in response to
post
Thanks heaps! And theta activity in the hippocampus!
comment in response to
post
Perhaps another way of asking the question is:
What parts of the code/analyses are important to think deeply about?
And what parts are not, and could potentially be automated and unit tested?
comment in response to
post
This point is extra important re: training people to link their data and analysis methods with the conceptual research questions of their project.
Writing (and coding) is thinking. Creating analysis code builds a type of expertise that is hard to get just by reading others' papers or code.
comment in response to
post
Usually the process of writing code makes you realise you don't know the solution in enough detail. If AI fills in the blanks by making assumptions you're not sufficiently aware of, that could be dangerous. So it seems like having the experience of writing code yourself would help.