There is no reason to humour anyone talking about AGI or give someone credit for doing so, it is not an "opinion" to "debate" as there is no actual proof it's possible. "Extrapolation" is another word for making shit up
Comments
Chatbots could get exponentially better each year for ten thousand years and still never be conscious. AGI is a religious concept, not a technical or scientific one.
Most definitions of AGI don’t require consciousness. And the Vatican seems to treat it as a technical concept:
> 102. […] while the theoretical risks of AI deserve attention, the more immediate and pressing concern lies in how individuals with malicious intentions might misuse this technology.
Not trying to prove anything, just found it interesting that there’s someone sitting in the Vatican researching these topics and fitting them into their theological framework.
So I do think this is an interesting point, because I feel like "AGI can't exist" is at least as much of a religious claim as the reverse. If evolution can develop material processes that result in human consciousness, in theory it should be a solvable engineering problem to deliberately replicate.
It's a fascinating thought experiment, absolutely! It just shouldn't be a concept that is taken seriously as a corporate goal or marketing campaign. Anyone raising it in those contexts should be laughed out of town.
oh yeah, I mean entirely in terms of "theoretically possible future development that doesn't require learning that Einsteinian physics is fundamentally wrong," not in the robo-Calvinist nerd-Rapture TESCREAL sense
Religious claims are an act of faith and can be disregarded. "There is no evidence for AGI" is a factual statement, it does not require faith, the opposite does.
"There is no evidence for AGI currently extant" is a fact statement. "AGI will never be practical" is speculation based on available facts. "AGI is impossible" seems, to me, to be predicated on the assumption that consciousness involves non-material components that can't be known or replicated.
The issues with AGI may just be down to silicon being a bad stand-in for biological processes. That doesn't require anything mystical. Or it could just be a question of compute/data/energy resources. Again nothing spooky or mystical required.
Yeah, there's nothing special about meat. Just because your brain is wet doesn't make it a different sort of thing than a silicon brain. The main reason AGI is a pipe dream is that extrapolating LLMs won't get us there, because they're the wrong sort of thing, and we don't know what else to do
mostly agree - but I don't think you'll get to AGI on silicon.
making rocks think is a tough business. I don't think recreating life itself would be easier, but I do think it would be easier to hijack the "existing work" for processing power.
What's irritating is that what they're calling AI is a card shuffling machine, we're giving it credit for the hand it's dealing but the meaning comes from the game that we're playing. The technology currently being marketed as AI does not lead to AGI.
Sure, I'm mostly making a distinction between "really really really really hard" speculative technology vs "literally impossible based on current models of how physics works"
(not to say that "AGI could be hypothetically possible" should be in any way conflated with "a predictive text generator with enough processing power could be a person, nay, a god")
that's very lazy logic, that's like saying atheism is just another religion... the generally accepted idea of agi is mainly championed by vibe coders hope-casting because they're too incompetent to comprehend the nature of complexity or capitalists who want to make a quick buck
computers have no intrinsic system of self-motivation (which we get through evolution), either biological or semiconductor based, because they're not being made for self-motivation but to follow orders. machine learning tools will just be that, tools..
people have a hard time understanding the conception of systemic emergence's role in how higher order complexity arises, americans especially because they have this weird cultish idea of extreme individuality and "manifest destiny", it'll happen "because we want it to happen" real bad.........
AGI is a pretty poorly defined term, but I think most people working on AI right now don't define it as conscious, but as being able to do the work that a normal human could do, autonomously
I think a big problem we haven't wrestled with is: what happens if we ended up at a point where it wasn't conscious but we could no longer prove it? Like, at some point it may be impossible to distinguish because we don't know how to define the bounds of "conscious".
I mean, our brains are proof it's possible but I think the current AI model of throwing enough computing power at getting the infinite monkeys with infinite typewriters thing to work is still a very early step.
We've made fusion reactors as proof of concept, the problem is that there isn't enough non-protium Hydrogen, or Helium-3 on this planet to make it practical at scale. The sun proves protium fusion is possible, we just haven't figured out how to make it happen at practical conditions we can control.
It proves that intelligence as we know it is physically possible, given that you have a method to manufacture working brain structures. We're still working on that last part.
abso fucking lutely not. if the brain even is a computer, which is a massive if, it is an analog super-Turing machine with hundreds of trillions of parameters and a state space exponentially larger than that. the brain is latently running a PDE whose solution is certainly transcomputational.
and that's just with a child's understanding of neuroanatomy, assuming the brain is a static network of synapses. there are glial cells, neuronal birth and death and complex pruning mechanics, low-range electromagnetic communication, and an as-of-yet unexplored world of intraneuronal processing.
A Turing machine can do more than a (one stack) pushdown automaton which can do more than a finite automaton. If brains can do something TMs can't do then they're an extra rung at the top of that hierarchy.
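For anyone who hasn't met the formalism, here is a toy sketch of that hierarchy in Python. The example languages are the standard textbook ones, and the code is illustrative rather than a proof of anything.

```python
# Toy recognizers for the classic automata hierarchy (illustrative sketch only).

def finite_automaton_accepts(s: str) -> bool:
    """Regular language: strings over {a, b} ending in 'ab'. A fixed, finite set of states suffices."""
    state = 0                              # 0 = start, 1 = just saw 'a', 2 = just saw 'ab'
    for ch in s:
        if ch == 'a':
            state = 1
        else:
            state = 2 if state == 1 else 0
    return state == 2

def pushdown_accepts(s: str) -> bool:
    """Context-free language: balanced parentheses. Needs a stack; no finite automaton can do it."""
    stack = []
    for ch in s:
        if ch == '(':
            stack.append(ch)
        elif ch == ')':
            if not stack:
                return False
            stack.pop()
    return not stack

def turing_level_accepts(s: str) -> bool:
    """a^n b^n c^n: beyond one stack; trivial for a Turing machine (or ordinary Python)."""
    n = len(s) // 3
    return s == 'a' * n + 'b' * n + 'c' * n

print(finite_automaton_accepts("abab"))    # True
print(pushdown_accepts("(()())"))          # True
print(turing_level_accepts("aabbcc"))      # True
```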
Aren’t all transformer-based AI systems stateless? Which, to me, seems pretty fundamentally different from all other AGIs we know, i.e. animals with a neuron-based brain.
Well, you can pretty simply add state to transformer models by giving them a dedicated scratchpad/memory text. Also researchers are working on models that update their parameters during inference.
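A minimal sketch of what "giving it a scratchpad" means in practice; `call_model` here is a hypothetical stand-in for whatever stateless completion API is being used, not a real library call.

```python
# Sketch: bolting external state onto a stateless completion model.
# `call_model` is a hypothetical placeholder, not a real API.

def call_model(prompt: str) -> str:
    """Stand-in for a stateless model: output depends only on the prompt it is handed."""
    return f"<completion conditioned on {len(prompt)} chars of prompt>"

class ScratchpadAgent:
    """All persistence lives in text we re-feed each call; the model itself stays stateless."""

    def __init__(self) -> None:
        self.scratchpad: list[str] = []

    def step(self, user_input: str) -> str:
        prompt = "\n".join(self.scratchpad) + "\nUser: " + user_input + "\nAssistant:"
        reply = call_model(prompt)
        # The only 'memory' is this growing text, re-sent on every call.
        self.scratchpad.append("User: " + user_input)
        self.scratchpad.append("Assistant: " + reply)
        return reply

agent = ScratchpadAgent()
agent.step("Remember that my cat is named Miso.")
print(agent.step("What is my cat's name?"))
```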
Humans don't even possess "general" intelligence; they don't even know what they're working towards
(in the process of learning, we have to actively trick ourselves, to fit the learning into the things we're actually cognitively good at. We don't have a singular general mechanism inside there)
If we do ever achieve "AGI", it'll be nothing more than a glorified mega-LLM dwarfing "regular" AI in the same way that a modern supercomputer dwarfs a MacBook Pro — an impressive leap in efficiency, but ultimately it's the same old shit.
Uh, I think you'll find us meatbags are just bootloaders for AGI. Our new machine god will simply thank me for being a loyal collection of cells with which it can start to plunder the universe.
Have you read "Blindsight" by Peter Watts? It's a fun sci-fi horror that includes an alien race that mimics intelligence in basically the same way current "AI" tech works. They analyze and apply an algorithm to hundreds of years of human transmissions to be able to replicate convincing speech.
Even though it's Sci-Fi, I think it's a really interesting exploration of the topic. Highly recommend, for people who don't quite understand what's wrong with AI.
A conditional probability distribution that predicts a next token based on previous tokens is not AGI. It's just not. Debating whether "agi is theoretically possible" is one thing. Debating whether LLMs are going to give us AGI is like trying to debate if my smart TV is going to result in AGI.
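To make "conditional probability distribution that predicts a next token" concrete, here is the whole idea at toy scale: a bigram model over a made-up corpus. Real LLMs condition on far longer contexts with billions of parameters, but the object is the same kind of thing.

```python
# A toy bigram "language model": literally P(next token | previous token), estimated from counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

counts: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str):
    """Sample the next token from the conditional distribution, or None if `prev` was never seen."""
    options = counts[prev]
    if not options:
        return None
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

token, output = "the", ["the"]
for _ in range(8):
    token = sample_next(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))   # plausible-looking word salad; no model of the world anywhere in sight
```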
There is no reason to humour anyone talking about landing men on the moon or give someone credit for doing so, it is not an "opinion" to "debate" as there is no actual proof it's possible. "Optimism" is another word for making shit up
The difference is that, for going to the moon, we had a process whose physics was well understood and within the reach of feasible engineering.
AGI is currently as real as negative mass, in that you can build a mathematical framework for it, but there are ZERO real world processes for achieving it.
Going to the moon is a matter of thrust to weight and sealed environments, and all the numbers fit inside limits of existing engineering.
We don't even know the first step towards building an AGI, to say nothing of how you'd actually align the thing even if you stumbled on the process by sheer luck.
I know you thought you sounded smart, but like most people shouting "AGI is almost here, trust me!" You haven't thought through even the beginnings of what you're talking about.
How would we know if we actually had the first step? Would it have to be a breakthrough in neuroscience?
It seems unlikely that the brain architecture that evolution stumbled upon would be the only one capable of doing all the things humans do
The first step is to have some kind of understanding of what you even mean by "intelligence" in a formalized, rigorous way, and then a method for accomplishing that programmatically at any level of capability.
So far all we have as example is us, and we have no idea how we do it.
We know it involves information processing, computation (probably), but that's a net so wide as to be almost useless with regards to determining how to do it in a completely different substrate... we don't even know it's possible in silicon.
It might be intrinsic to brains; it might be irreproducible
diamonds are made of carbon, and so is horse shit. So if I keep shoveling more horse shit into this pile in my bedroom, you see, I'll have a diamond in the next year or so.
Is there actually any universally accepted definition for what AGI even means? I don't think there is. It's just a label we will slap on when our AI technology gets good enough.
Yeah I just looked it up and I'm very confused. How is this different from regular AI? What about it is so dangerous? I can't really find any clear answers.
The difference is: it could do not just some tasks as well as or better than humans, but all tasks.
Some dangers would then be:
- no more jobs for humans
- designing bioweapons is also a task
- those who control it might have unprecedented power
- they might accidentally lose control
I meant what's the difference between AI (which we've been told will eventually be able to do all the things you mentioned) and AGI. Why do we need a new phrase, other than to confuse an already confused populace?
To confuse is definitely a goal for some, but I think people who research these things want to separate the specialized narrow systems from the fully general.
I was 100% convinced AGI would never happen and was impossible and basically my position switched 180 degrees over the last year when I realized I was assuming it would be done using only electronics.
It's just more bullshit hype to get people to help sustain the AI bubble. It's how SV and the VC firms backing so many of these entities have been operating over and over and over.
these tech bros throwing money around don't even know what the scaling law is for these huge models, but i suspect the goal is just speculative profit at the expense of stability anyway.
oh there's definitely a big hole in the ground people are throwing money in, but i can only assume individuals like sam altman won't be bankrupt in the end. Or am i wrong?
It’s fine as a sci-fi concept, but the LLMs are just a Chinese box containing infinite monkey typewriters and I instantly lose all respect for anyone that talks about it becoming or being an AI
THANK YOU. I can't believe how stupid people are. They're burning down the world b/c they can't distinguish between reality & their (truly dark, anti-human) fantasies, and have convinced themselves they're prophets when they're truly some of the shallowest & most immature adults to have ever lived.
Since we still don’t know what consciousness is or what brought it to be, it’ll remain impossible to build a conscious machine until we do.
Mapping every neuron and recreating the brain on a PCB still probably wouldn’t create a sentient machine, even with the best modern technology.
If Penrose is right, then there are tasks machines can't correctly perform, even setting aside consciousness. If "understanding" is non-Turing-computable then there is a wall computers (meaning Turing machines) can't cross
If this "understanding" is required e.g. for correct NLP then computers are barred
Yeah, sure, I hope that there’s some barrier to superintelligence through the halting problem or Gödel's incompleteness. But I find Penrose's arguments on needing "new physics" to explain how brains work quite handwavy. For me it seems likely that human neurons do only Turing-computable things.
Neuroscience hasn’t even wholly explained how serotonergic systems *work,* to the point that psychiatry prescribes SSRIs by the millions based on little more than a hypothesis.
..but yeah, sure, acceptance of the current limitations of our understanding is ‘hand wavy.’ 🤦🏻♀️
One thing that drives me crazy about AI/ML folks is that we don't even know the learning mechanism of the brain - it's not backprop, that's just an engineering invention. I think it would be hilarious if quantum effects turn out to be significant a la Penrose and say WFC guides learning
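For anyone who hasn't seen it spelled out, here is backprop in miniature: one weight, one example, squared-error loss. This is the "engineering invention" in question; whether brains do anything remotely like it is exactly what's unknown.

```python
# Backprop/gradient descent in miniature: fit y = w * x to a single example.

x, y_true = 2.0, 10.0     # one training example: input and target
w = 0.5                   # initial weight
lr = 0.05                 # learning rate

for step in range(20):
    y_hat = w * x                       # forward pass
    loss = (y_hat - y_true) ** 2        # squared-error loss
    grad = 2 * (y_hat - y_true) * x     # backward pass: d(loss)/d(w) via the chain rule
    w -= lr * grad                      # gradient step

print(round(w, 3))                      # converges toward 5.0, since 5.0 * 2.0 = 10.0
```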
Let me be more specific: Claiming that some process we don’t understand needs new physics is hand-wavy in the sense that Penrose just replaces one kind of "we don’t know" with a different, more speculative "we don’t know"
Not for the standard of AGI that Ed talks about often, but for the sake of super-intelligence, yes.
AGI as most people know it means a sentient, super-intelligent machine. How Ed has often referred to it (the correct definition) is a general AI model that can compute whatever you need it to.
I worry that you can get to a process that outperforms humans in all intellectual tasks (so basically super-intelligence) without either sentience or consciousness. And we might not be able to tell either way.
I think no one is looking at how Gödel's incompleteness applies to LLMs / AI in general. I don't think AI will replace us. I think our incomplete nature complements its incomplete nature. I believe that digital twins based on real-time biometrics that just adjust weights could be a path.
I understand that incompleteness applies to formal systems and that LLMs don't fit that, but the fundamental math i.e. matrix manipulation and vector math should be incomplete. I've seen evidence of it in my art, and I explore it with AI as an intelligence test. These images glitched in generation.
Everything I have seen from AI has told me that we are nowhere near “AGI” and I agree that there is nothing to suggest it is even possible. If you ever use these things they do not do what people promise and there is no real use case! Spending trillions and all it does is kill the environment.
Even if it's possible, would it be useful? Computers are already capable of doing things no human can do, so what exactly would be the benefit of making them do things that humans can do?
If at some point it can pick up the garbage for less than a human would get paid, then capitalism will see to it that no human picks up garbage anymore.
That is not an AGI problem, that is a robotics problem.
It's not an application for AGI. A non-general system can already do it.
So suppose you can make it, suppose you want it to pick up trash, but a specialised system can already do it at one hundredth of the cost, so what's it for?
If it’s actual AGI then any task a human would get paid for could be done by it, and yes, many of those tasks require robotics. And researching better robotics is also one of those tasks.
Also a word that implies an actual mind that's fucking up instead of a random word generator that we really inefficiently made only output grammatically correct sentences.
I mean, why not just call it an "error"? In what way does that not describe what's happening when ChatGPT says there's no "R" in "cherry," or whatever?
If we consider it a stochastic prediction algorithm, it's not really designed or trained to count Rs or respond with facts, only to generate reasonable-sounding completions. So in that sense it's not an error, more like the wrong tool for those jobs.
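One concrete reason it's the wrong tool: the model doesn't even receive letters as input. The toy vocabulary below is made up for illustration; real systems use learned subword (BPE) vocabularies, and whether a particular word gets split depends on that vocabulary.

```python
# Why "how many R's are in cherry?" is an odd fit for a next-token predictor:
# the model sees opaque token IDs from a subword vocabulary, never individual letters.
# This vocabulary is invented for illustration; real tokenizers are learned from data.

toy_vocab = {"ch": 101, "erry": 102, "how": 103, "many": 104, "r": 105, "?": 106, " ": 107}

def toy_tokenize(text: str) -> list:
    """Greedy longest-match over the toy vocabulary (real BPE is more involved)."""
    ids, i = [], 0
    while i < len(text):
        for piece in sorted(toy_vocab, key=len, reverse=True):
            if text.lower().startswith(piece, i):
                ids.append(toy_vocab[piece])
                i += len(piece)
                break
        else:
            i += 1   # skip characters the toy vocabulary doesn't cover
    return ids

print(toy_tokenize("cherry"))   # [101, 102] -- "cherry" arrives as two opaque IDs, not six letters
```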
Technically, 'There's no "R" in "cherry"' is a valid, natural sounding, understandable sentence so the program's working fine. There's nothing going "wrong", no glitch or bug in the code.
It's when companies sell and implement it as if it can differentiate what is true that it becomes a problem.
Right, the real issue is that it's being marketed and used commercially to do things it's not capable of doing. So yes, if I'm just playing around with its ability to generate grammatical sentences, there's no "wrong answer," but if I ask an airline chatbot for ticket prices, there absolutely is.
That it's impossible due to non-material components etc. only "seems to you" that way; the assumption is (I think) one you're bringing in yourself.
but also HOLY SHIT IT'S AN ELECTRIC TALKING PARROT
this technology is so fucking cool without the hype. it's what Siri and Alexa should've been.
Plenty of theoretically possible things are practically impossible.
🤷♂️
The existence of brains doesn't indicate that any computer we currently have can do the same thing.
And say it could, what would it be good for?
Justin did not say "computer we currently have", but I guess all extrapolation is forbidden.
I mean sure we know they exist as a physical object, it doesn't mean we can make one with technology.
I don’t understand that part. Isn’t Turing completeness an equivalence that goes both ways?
It does not follow that it encompasses all possible ways that computation or "thinking" can be done. In fact it definitely does not.
In the context of AI you don't need human-like intelligence to be useful or do smart things.
Indeed most existing IT is very fast and efficient and not at all human-like.
But the question is what useful things LLMs can do, not how they become AGI.
AGI is not possible *yet* and it won't ever be with LLM tech.
Also our hardware is hilariously bad and inefficient compared to wetware. It's like trying to run Crysis on an abacus.
1. Those who can extrapolate from incomplete data
2. (Unclear)
3. Godlike machine intelligence
You sound like the people in the 1880s saying "What's the big deal with 'electricity'? It's just a pretty good gaslamp."
“fully-functional program” is the lowest bar possible
https://bsky.app/profile/tujungadude.bsky.social/post/3lkbahy7wvs23
And in all likelihood no one in the next few centuries is either.
The people loudest about this are the ones lacking any understanding of how the tech works.
The branding of AI has been successful I guess.
I say that as someone with a much more positive view of AI than I think you have based on your writing.
Marge to Disco Stu after he talks about himself in 3rd person: "Who's Disco Stu?"