sparuniisc.bsky.social
Neuroscientist at IISc Bangalore studying visual perception using 🐒🚶🏽‍♀️💻
https://sites.google.com/site/visionlabiisc/
67 posts
511 followers
222 following
Regular Contributor
Active Commenter
Maybe so, if all edges are detected nicely. Computer-vision-based edge detection doesn't look like this.
We are grateful to the (late) Carl Olson, @nancykanwisher.bsky.social and many others for their critical and constructive inputs, to the awesome India Alliance for the blue-sky funding and to CNS & IISc for continuing to be an awesome place to work! And we're done! Thanks for making it this far! n/n
(started in Aug 2022, got reviewed by several high-profile journals, got kicked out each time by one dissenting reviewer out of three who gave single-paragraph reviews, etc., and finally ended up at eLife, where the reviewers still gave us a hard time, but we are done) 29/n
For now we are content with showing a really cool set of results, which we have worked hard on over several years and shepherded through an unusually long review process 28/n
We think this computation could predict all kinds of visual behaviours – such as visual search asymmetry, aesthetic appeal, perceived complexity and image memorability. All this needs to be investigated thoroughly before we know what this all means. 27/n
To sum up, we think we’ve found experimental evidence for a novel computation called visual homogeneity, performed in a localized region of the brain and used to solve property-based visual tasks. 26/n
If visual homogeneity is a truly universal computation, then it should also work for a symmetry task. Here too, we obtained exactly the same pattern of results - the visual homogeneity computation predicted symmetry responses, and the same region showed proportional activations!! 25/n
This region is just anterior to the lateral occipital complex, where neural dissimilarity between images matched perceived dissimilarity 24/n
In the brain, we found a localized region whose brain activity is directly proportional to visual homogeneity. 23/n
So, armed with these predictions, we collected and analyzed our data.....and lo and behold! Exactly as predicted, we found a center in perceptual space relative to which distance computations do predict oddball present/absent search 22/n
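(Not from the thread: a minimal sketch of what "finding a center" could look like, with synthetic data standing in for the real array responses and labels; the actual study fits the center to observed behaviour.)

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
responses = rng.normal(size=(50, 3))                # toy array responses in perceptual space
absent = rng.integers(0, 2, size=50).astype(bool)   # toy target-absent labels

def loss(center):
    # A good center puts target-absent (homogeneous) arrays far away
    # and target-present arrays close by, so we maximize that gap.
    d = np.linalg.norm(responses - center, axis=1)
    return -(d[absent].mean() - d[~absent].mean())

center = minimize(loss, x0=np.zeros(3)).x
print(center)  # fitted center; distances to it would then be compared against search RTs
```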
By contrast, regions that encode task difficulty will not show this pattern, because task difficulty is largest in the middle range of VH, not at the extremes 21/n
If a single brain region (region VH) encodes this quantity, then the activity of this region should be directly proportional to visual homogeneity. 20/n
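(My illustration, not the paper's analysis code: the prediction boils down to a positive linear relationship between model VH and regional activation across images.)

```python
import numpy as np

vh = np.array([0.2, 0.5, 0.9, 1.4, 2.0])    # model visual homogeneity per image (made up)
roi = np.array([1.1, 1.3, 1.8, 2.4, 3.1])   # hypothetical mean activation in region VH

slope, intercept = np.polyfit(vh, roi, 1)
r = np.corrcoef(vh, roi)[0, 1]
print(slope, r)  # direct proportionality predicts a positive slope and r near 1
```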
If such a computation were actually being used by the brain, what would we expect? Well, first of all, it should act like a decision variable, which means that any stimulus close to the decision boundary will be hard to decide on and will have long response times. 19/n
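(A toy linking function of my own, just to make the prediction concrete; the paper fits its own model.)

```python
import numpy as np

def predicted_rt(vh, boundary, baseline=0.3, gain=0.5):
    """Toy model: stimuli whose decision variable (vh) lies near the
    decision boundary are hard to decide on and yield long response times."""
    return baseline + gain / (np.abs(vh - boundary) + 1e-6)

print(predicted_rt(np.array([0.1, 0.9, 1.1, 2.0]), boundary=1.0))
# RTs peak for vh near the boundary (0.9, 1.1) and shrink at the extremes
```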
Because this is a very specific computation - calculating the distance of each visual stimulus to some hypothetical center in neural space - we called this quantity/computation "visual homogeneity". 18/n
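(In code form, under my own naming, the computation is just a distance in neural feature space:)

```python
import numpy as np

def visual_homogeneity(response, center):
    """Distance of a stimulus's neural response to a hypothetical
    center in neural feature space."""
    return np.linalg.norm(np.asarray(response) - np.asarray(center))
```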
The same idea would work for symmetry tasks: because the symmetric object has the same visual features repeated, it will "stand apart" compared to asymmetric objects, which behave like visually heterogeneous arrays. Thus we could "solve" a symmetry task by computing the distance to some center 17/n
What this means is that visually homogeneous arrays would automatically create a neural response that "stands apart" compared to visually heterogeneous arrays. Thus we could "solve" the oddball search task by simply computing the distance of each image to some center 16/n
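(A hedged sketch of that decision rule; the center and threshold are free parameters fit to behaviour, and the averaging rule is the one described in 14/n below.)

```python
import numpy as np

def array_response(item_responses):
    # averaging rule (see 14/n): array response = mean of the item responses
    return np.mean(item_responses, axis=0)

def respond_absent(item_responses, center, threshold):
    # homogeneous (target-absent) arrays end up farther from the center,
    # so a large distance maps to an 'absent' response
    vh = np.linalg.norm(array_response(item_responses) - center)
    return vh > threshold
```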
So, when you see an array containing identical items, the neural response is equal to the single-item response (well, almost, to an approximation). But when you see an array containing an oddball, its representation lies somewhere between the neural responses to the two items. 15/n
Well, to test it, we first need to explain how we get the response to a multi-item array from the responses to single items. Luckily, we know the answer: the neural response is the average of the single-item responses, as shown by us and others 14/n
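(A toy demonstration of that averaging rule with made-up population responses:)

```python
import numpy as np

r_A = np.array([5.0, 1.0, 3.0])   # toy population response to item A
r_B = np.array([1.0, 4.0, 2.0])   # toy population response to item B

r_same = np.mean([r_A, r_A, r_A, r_A], axis=0)   # homogeneous array (AAAA)
r_odd  = np.mean([r_A, r_A, r_A, r_B], axis=0)   # oddball array (AAAB)

print(np.allclose(r_same, r_A))  # True: identical items give the single-item response
print(r_odd)                     # lies between r_A and r_B, closer to r_A
```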
Well, in science, just having a cool idea isn't enough; you've got to show that it works. And as Feynman says, if it doesn't work, it doesn't matter how famous or rich you are, it's just wrong. It's a heartbreaking but beautiful aspect of doing science. 13/n
www.youtube.com/watch?v=p2xh...
We thought maybe we use the same underlying computation to solve all three tasks, despite the fact that our verbal descriptions of them are entirely different! Cool, no? 12/n
In a same-different task, the "same" display is visually homogeneous. In an oddball search task, the "target-absent" array is visually homogeneous. In a symmetry task, the symmetric object is visually homogeneous. 11/n
We realized that the common property across all these property-based tasks (same-different, oddball search and symmetry) is that you are trying to discriminate a visually homogeneous image from a visually heterogeneous image 10/n
Our key insight was based on our earlier work where we showed that symmetric objects become special due to simple computations in neurons, and that perceiving symmetry does not require "symmetry detectors" as many have proposed 9/n
What's the feature space used by the brain to solve these tasks? What's the decision variable? You cannot solve these tasks by looking for any particular feature in feature space, because an object can have any set of features and be symmetric or asymmetric. 8/n
But there's an entire other category of visual tasks that do not fall into this framework. These tasks involve searching for a particular property, like determining if two images are same or different, deciding if there's an odd one out, and even deciding symmetry! 7/n
You see, most visual tasks are feature-based. If you are searching for the handsome Georgin in a crowd while ignoring other ugly distractors, you'd have to train a classifier and project any image onto it to get a decision variable. This is the standard model for decision making 6/n
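(That standard feature-based account in sketch form; scikit-learn and the toy features are my choices, not the thread's.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))             # toy image features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # 1 = 'target present' under a toy rule

clf = LogisticRegression().fit(X, y)
dv = clf.decision_function(X[:1])  # project an image to get the decision variable
print(dv)                          # sign and magnitude drive the choice and its difficulty
```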
If there's such systematic variation, there's some intrinsic image property that makes it a "good" or "bad" distractor. What could this be? For a while we puzzled over this question, but we quickly noticed an even more fundamental question. How do we even solve this task?! 5/n
Don't believe it? Below are some example arrays in which you should confirm whether there is or isn't an oddball. You might find that it's easier to confirm that there's no target in the rope array (mean RT = 1.06 s) than in the leaf array (mean RT = 1.45 s). Why?!! 4/n
Now this posed a puzzle - when there is an oddball target among distractors, it's well known that it's easy to find if it is dissimilar to the distractors. But in target-absent searches there is no target, so why would you take more or less time to "not-find" a target? 3/n
The seeds for this study were planted by a curious observation made by Sricharan Sunder (@sricharan92) in our cool target preview study back in 2016. We saw that people took systematically longer or shorter to say that a target was absent in a search array. 2/n
jov.arvojournals.org/article.aspx...
Cool work!! It might also help to know whether the illusion arises in deep networks trained for recognition.
But a perfect storm like this can be really discouraging to a student for whom it is one of their first papers. For the sake of science and our scientists, I sincerely hope this is an outlier in the review process!
And....we're done! Thanks for reading this far!
To be clear, the editors involved have been supportive but were helpless, with many reviewers refusing to accept invitations, delaying submission of their reviews despite repeated reminders, or refusing to review a second time - all of which is understandable in isolation
That's a whopping 3 years from entering the peer review process to final publication. This may be okay for a tenured faculty member like me, but NOT okay for our students, for whom publications are widely seen as critical to success.
THEN it got reviewed at this journal for 5 months, went through two rounds of revision, and finally got accepted in July 2024 and published online in Oct 2024.
But equally, we had a terrible time during peer review. We started submitting to journals in July 2021. We got reviewed for 6 months and rejected at one journal, got reviewed for 8 months at another journal, got invited to revise, resubmitted, and got rejected in a month
And now for some backstory. This was our pandemic study, conducted fully online by Thomas, who worked hard to adapt our desktop-PC-based paradigms to the online platform Pavlovia, benchmark them, and run the study fully online in spite of the shutdowns, panic and infections at the time.
We are deeply grateful to the funding support for this study from the awesome India Alliance to our lab and ICMR for the fellowship that supported Thomas during his PhD.