I believe that generative AI may be the biggest dunce mask-off moment ever. These things do not make sense! How does using AI for coding lead to AGI? How does *generative AI* lead to AGI? Hell, why don't we start simple: what is AGI in this context?
Comments
AGI is what we say is AGI!!! To quote Upton Sinclair “It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It.”
Generative AI simply means "statistical model." Nothing more. It's predicting what words come next in a sentence, weighted by whether the instructions are "creative" or "direct."
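For the curious, here is a toy sketch of what that means in practice; the corpus, counts, and function names are all made up for illustration, and "creative" vs. "direct" maps loosely onto a temperature parameter:

```python
import random

# Toy next-word table: counts from a tiny pretend corpus stand in for a trained model.
counts = {"the": {"cat": 4, "dog": 3, "idea": 1}}

def next_word(prev, temperature=1.0):
    """Sample the word after `prev`, reweighted by temperature.

    Low temperature ("direct") sharpens toward the most likely word;
    high temperature ("creative") flattens the distribution.
    """
    options = counts[prev]
    # p_i proportional to count_i^(1/T), i.e. a softmax over log-counts at temperature T.
    weights = [c ** (1.0 / temperature) for c in options.values()]
    return random.choices(list(options), weights=weights, k=1)[0]

print(next_word("the", temperature=0.2))  # almost always "cat"
print(next_word("the", temperature=2.0))  # "idea" shows up far more often
```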
And it still makes mistakes because it does not understand language, and cannot.
I expect this talk from Musk or Altman; they're idiots and serial failures who couldn't code a "Hello World" app. Brin, though, is actually pretty smart, and has some real achievements to hang his hat on. Did his brain rot from the hype or the vast sums of money?
They shoved Gemini into Android Studio, meaning every Android dev has it.
Not only is it not that good or helpful, the built-in chat feature cannot reliably reference the Google-penned documentation; it hallucinates entire functions and will even give you doc links that don't exist and never have.
LLM writes LLM code
LLM improves itself
LLM improves itself faster than a human could
Somehow this turns "there are no countries starting with K in Africa" into a superintelligent entity.
Most of these chuckleheads don't understand how gen-AI works, and I'm sure they think "it will lead to AGI" because the outputs look intelligent when they get the answer right.
What worried me was attending the PyTorch conference last fall and seeing that the people who should know better believe the same bullshit.
The Moscow-born 10th-richest guy has a big stock stake in a scam that requires ignoring the built-in regression that comes from modeling its own slop, so he has to keep saying it will get better if they just accelerate the wasted energy that's destroying our climate.
Turning human need into code requires a process that lies somewhere between research and harsh interrogation, to extract knowledge of processes that people often do not even realize they know. Only a human with good communication skills & some decent capacity for abstraction can effectively perform this role.
It reminds me of the GameStonk guys talking about MOASS. Just some vague idea that in the future something will come along to save your investment, and the dream alone is worth preserving.
It makes you wonder whether the point was just to leak the note to fuel all the idiots who believe this. No one who understands generative AI (not to mention the state of it) thinks AGI is near.
I know some of these dunces are still clinging to some Kurzweilian belief in an impending tech singularity, but my personal favorite definition of AGI is Altman's (paraphrased)... "AGI is when it makes us more money than it costs to run." Peak late stage capitalism brain.
I read this as him pulling in the idea of the “intelligence explosion” from eg Superintelligence. As in, he’s implying that AI (or people using AI) is *already smarter* than people can be on their own, so we use it as a kind of bootstrap to build yet smarter AI, and so on.
I see people saying that OpenAI will "arrive" at AGI by redefining what AGI is (i.e. lowering the bar so much that they're able to clear it). And I totally see that happening. Because as you pointed out about a hundred times now, what else do they have? Nothing.
I love your podcast and general vibe, Ed. But I believe you might be a little behind on what agents are and how they work. The reason I say that is that it's time to start going Sarah Connor, and you seem like you still view LLMs as a nonthreatening and nondisruptive tech.
Something it seems no one can explain: setting aside all the other problems with the industry, why is it a Race to achieve AGI when we don't have a use case for it? There is no first mover advantage for the first team to invent something useless.
It's not about AGI, it's about trying to get 60-hour workweeks out of their workers.
Set a goal that will never be achieved ("reach AGI"), say the worker needs to work harder in order to reach it, and now you've got a perpetual excuse to overwork your employees.
So Henry Ford did time studies on the most productive work week in terms of number of vehicles produced vs. defects, and came to a 5-day, 40-hour week. This has been proven over and over in different industries. The computer industry isn't any different, as much as the oligarch tech bros want it to be.
The theory is that by using AI to write better code, you can create better AI, which creates a cycle of exponential improvement.
The problem with this theory is that it assumes that using AI makes you write better code. The reality is that AI lets you write shitty code more quickly.
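You can put made-up numbers on both halves of that argument. A toy sketch (nothing here is a real measurement) shows why the story is so sensitive to its one assumption:

```python
def improvement_loop(capability=1.0, factor=1.05, generations=20):
    """Toy model of the recursive-improvement claim: each generation of AI
    helps build the next, scaling 'capability' by a constant factor."""
    for _ in range(generations):
        capability *= factor
    return capability

# If each cycle genuinely improves things (factor > 1), growth is exponential:
print(improvement_loop(factor=1.05))  # ~2.65x after 20 generations
# If each cycle yields slightly worse code (factor < 1), the same loop decays:
print(improvement_loop(factor=0.95))  # ~0.36x after 20 generations
```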
I'm just a layman here, but this feels like Silicon Valley collectively flying very close to the sun all at once. The message being: "screw workers, let's get the most powerful AI possible because that'll fix humanity." But then you poll people about AI and they're terrified it'll actually destroy humanity.
All I can imagine is they think "Trained on everything" means "Good at everything," ignoring the thousands of other factors that apply to human general intelligence, consciousness and learning. It's like when you bump into a mannequin and apologise. It was lifelike for a moment until you looked.
Ya that somewhat makes sense. I just really can’t wrap my head around why a training set being large means the model can suddenly handle anything outside the training set
I love how AI (and AGI, I guess) is supposed to make us more productive, so naturally we should work more hours so they can milk so much more productivity from people.
As someone who uses AI for coding, lol. This stuff regularly goes off the rails and will destroy your codebase if a human isn't continuously in the loop reining it in.
Or, how about the fact that he is asking the employees to work harder with the reward - checks notes - that they could be laid off even though the company made huge profits.
But... but @edzitron.com muh venture capital bro! Muh LLMs magically progressing to Star Trek-esque neural net ship computers and a space-faring civilisation bro. Like, surely that's the next iteration bro. .... bro?
I always feel like I'm taking crazy pills every time I read AI news. Everything we've been seeing for the last 18 months would make perfect sense if it were AGI, but it's not. It's like the entire C-suite of every tech company and venture capital firm lost their minds at the same time.
I love how in all of these conversations around AI & AGI, there's no mention of the capitalism-ending development they are "proposing". If it was real, why are none of these luminaries talking about UBI, diminished human cognition, or any of the societal restructuring that real AI would cause? It's bullshit
Meanwhile I just read a study asserting that AI-generated code is reducing overall code quality because there is less code refactoring, among other reasons.
No evidence that LLMs will result in AGI. Anyway, if we do get AGI, it will be made in China, and they will pull it off with a fraction of the resources the West would use.
I think the idea is it improves itself by re/coding itself into some ultimate form over generations, but faster and better than us fleshy sacks can manage.
It's the infinite monkeys argument, but with the upside that some monkeys can read a bit, but the downside that they don't know what to read. I suspect, anyway.
Well, it's more infinite generations of monkeys, but the food has run out and they're also eating each other and their own shit so it's prion disease and dysentery all the way down, baby!
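There's a standard statistical caricature of that "eating its own output" failure mode, sometimes called model collapse. This sketch (made-up numbers, not anyone's real training pipeline) shows the spread of the data shrinking as each generation is fitted only to the previous generation's samples:

```python
import random, statistics

random.seed(42)
REAL_SPREAD = 10.0
# Generation zero is "real data": wide and varied.
data = [random.gauss(0, REAL_SPREAD) for _ in range(10)]

for generation in range(100):
    # "Train" the next model on the previous model's output: fit a Gaussian, then resample.
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(10)]

# Finite sampling loses a little of the tails each time, so diversity collapses.
print(f"spread after 100 generations: {statistics.stdev(data):.2f} (started at {REAL_SPREAD})")
```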
Isn't Brin the same dipshit that misunderstood a test for Parkinson's once and went off the deep end for years, basically running around doing drugs and having affairs with random women?
That's also not what productivity means. The point is how much you actually produce in each hour. If you need to work more hours, you are being less productive!
Haven't studies shown that you actually become LESS productive than if you'd just stopped at 40-45 hours? There's actually no benefit to asking employees to do this unless...maybe if you just want them to be completely oblivious to anything else going on in the world.
It's often recommended by managers, who might be "working" for 60 hours a week... but a lot of that is chatting, meetings, socialising, telling others what to do, and other stuff that's work-adjacent at best. And often it's just wanting their serfs to labor longer!
It’s such a dismal view of humans that a very computationally-intensive plagiarism machine with the most efficient algorithms to predict the next token is how they perceive human intelligence.
It presumes we’re done with Eureka! moments and that right-brained activities are for suckers
“artificial intelligence” makes sense as an umbrella term for GPT models, LLMs, machine learning and more. I’m not convinced that artificial general intelligence is a thing that will really exist outside of sci-fi?
Can any of these "founder" dipshits please watch a YouTube video or read something at this point on the limitations of this technology? I am begging them! Hell, a TikTok with Subway Surfers in the background would make them more informed than whatever *this* is.
"AGI" can mean an LLM system that does every "task" as well as a human. Rational to think that's possible. It can also mean "consciousness". That is speculation.
Fair question, and it very quickly becomes a discussion of philosophical understanding. "What is intelligence? What can be measured? Is facility in passing complex tests a road to general intelligence?" Fact: LLMs are very quickly demonstrating competence equal to or surpassing humans' in more and more domains.
To think this will continue is rational. What if LLM computational systems can do 90-100% of all tasks better than humans, like chess apps now? Hopefully that leads to "AGI agents" that still follow human direction - but that can also be bad - AGI harnessed to Xi Jinping's desires?
Grifting
Investors
There are... flaws to this idea.
No wonder he married Shanahan.
https://bsky.app/profile/edzitron.com/post/3ljd4goycps2j
My what?
A....AGI. The technology to free us all of labor costs...intensive tasks
Ridiculous of course.
I don't agree with most of what the Google guy is saying, but he's 100% right about the path to AGI via gen-AI.
I've already seen some agents that are getting scarily close to what we would consider AGI.
Mediocrity squared.
Working hard to inspire developers to create a Skynet that will go after him first.
I bet that meeting would be funny to listen to.
fallacy of motivated reasoning