irisvanrooij.bsky.social
Professor of Computational Cognitive Science | @AI_Radboud | @[email protected] on 🦣 | http://cognitionandintractability.com | she/they 🏳️‍🌈
923 posts 14,349 followers 846 following
comment in response to post
Special thanks to @irisvanrooij.bsky.social, whose article on reclaiming AI for cogsci (in particular its table) gave me the idea for defining AI as a "history of practices reflecting different ideas of AI." link.springer.com/content/pdf/...
comment in response to post
See also (different angle) www.linkedin.com/posts/jdshad...
comment in response to post
I would not be interested in reading work where the author uses generative AI to brainstorm, write, and/or revise. I wrote a brief blog about why some time ago: irisvanrooijcogsci.com/2022/12/29/a...
comment in response to post
Thank you for the feedback! So happy it is useful.
comment in response to post
Read again, it says “human(-like or -level) cognition”. I have engaged in good faith, but bowing out now. You can read or ignore, but no need to waste my time
comment in response to post
IF you want to use RL for solving the AI-by-Learning problem as formalised, then the same results apply. The claim about *all* algorithms is relative to the problem. It may help to read the paper in this contextualised way and follow along how we build the ideas, instead of pre-set expectations.
comment in response to post
Of course RL is an active research area and produces some useful applications. This is not what our paper is about, not what my response was about. You are interested in other things than what this paper is about. That is valid. You can ignore our paper.
comment in response to post
RL is already known to be intractable. Second point is not true, see further down the thread (and of course in the paper) bsky.app/profile/iris...
comment in response to post
bsky.app/profile/smit...
comment in response to post
Oof. The bits underlined in red are best understood to be accompanied by hand-waving, eye-rolling and a tone of "yeah, sure buddy, here's your magical flying pig, we'll say that exists". bsky.app/profile/iris... /3
comment in response to post
Just going to leave this here
comment in response to post
Thanks, nice thread :)
comment in response to post
"Given that we don't have infinite time, money, data, or flying pigs, is it reasonable to believe that LLMs can replicate human thought at even the very most basic level? No." (Again. The durr is left out by convention.) bsky.app/profile/iris... /7
comment in response to post
"It's so hard to do that, again even with the easiest possible bar to meet, we're invoking numbers of the atoms in the universe to explain how much it would cost." (Scientific decorum dictates that the "ya bozos" is left unsaid, merely implied.) bsky.app/profile/iris... /6
comment in response to post
"It's so impossible for this to succeed that we'd prove it also possible to invent anti-gravity, a cake unbaker, and IDK an ice hockey rink in hell." "NP hard" means "so hard that there's a special language for how totally unsolvable in the lifespan of the universe" bsky.app/profile/iris... /6
comment in response to post
"Seriously, we don't expect our hypothetical engineer with her magical flying pigs and absolute best case conditions and just like the LOWEST bar that we can think of to succeed clearly, just like sort of close." bsky.app/profile/iris... /5
comment in response to post
"We're not even going to ask for a good approximation, literally anything even close to good will do. Just very, very roughly can be fine." bsky.app/profile/iris... /4
comment in response to post
This post's use of "magically and at no cost" is so scathing as to be almost plain text. "Non-negligible probability" means roughly "noticeably better than chance, as in the AI gets things right often enough for it to be not just lucky coin flips." bsky.app/profile/iris... /2
comment in response to post
There might even be reasons to think it cannot get much better (or at least not easily): bsky.app/profile/iris... (note I have not looked into this research, so it could be saying something else; sorry if I misused your post then, Iris).
comment in response to post
Our proof grants that cognition is computationally tractable itself, and also allows for (very lenient criteria of) approximation. See below in thread: bsky.app/profile/iris...
comment in response to post
bsky.app/profile/iris...
comment in response to post
Thanks 🎉
comment in response to post
The intractability proof (a.k.a. Ingenia theorem) implies that any attempts to scale up AI-by-Learning to situations of real-world, human-level complexity will consume an astronomical amount of resources (see Box 1 for an explanation). 13/n
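To give a feel for what "astronomical amount of resources" means here, below is a minimal illustration (not from the paper, and the specific functions and per-step cost are assumptions for the sketch): an exponential-time algorithm overtakes any polynomial one so fast that at modest input sizes its step count already exceeds rough estimates of the number of atoms in the observable universe (~10^80).

```python
# Illustration only: why an exponential lower bound implies
# "astronomical resources" in practice. The concrete functions
# (O(n^3) vs O(2^n)) and the 1-nanosecond-per-step cost are
# hypothetical, chosen just to show the scaling behaviour.

def steps_polynomial(n: int) -> int:
    # A tractable algorithm, e.g. cubic time
    return n ** 3

def steps_exponential(n: int) -> int:
    # An intractable algorithm, e.g. exponential time
    return 2 ** n

NS_PER_YEAR = int(1e9) * 60 * 60 * 24 * 365  # steps per year at 1 step/ns

for n in (10, 100, 300):
    poly_years = steps_polynomial(n) / NS_PER_YEAR
    exp_years = steps_exponential(n) / NS_PER_YEAR
    print(f"n={n}: polynomial ~{poly_years:.2e} years, "
          f"exponential ~{exp_years:.2e} years")
```

At n = 300 the exponential algorithm needs about 2^300 ≈ 10^90 steps, more than the estimated number of atoms in the universe, which is the kind of comparison invoked earlier in this thread.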
comment in response to post
Oh sorry, could be a Dutchism too