Sorry, forgot the Y axis due to my ongoing concerns about the white genocide going on in South Africa. Did you know there's a white genocide going on in South Africa? It's true and that's why there's no Y axis.
One of my favorite things is the assumption that we have infinite computing power, as if AI isn’t already beginning to run into massive compute bottlenecks
Yeah, "infinite compute" is a total myth. Not just about chip count either. Memory bandwidth, data movement, and the insane power/cooling needs are real bottlenecks. Building more fabs doesn't instantly fix that infrastructure nightmare.
These people really just don’t want to admit that software still exists within the bounds of its physical hardware. The only answer *might* be quantum computing, but that is decades if not a century away from being developed enough for actual use
i basically watch your ai takes as if i am a member of an uncontacted tribe and i am getting to see what the anthropologist makes of us. this is some deluxe ssx slop
This is the one where they assume the government magically abolishes all laws and regulations at once, in the most litigious society that has ever existed on the face of the earth?
I’ve had the pleasure of discussing this in person a few times now and “that’s not how government works at all” is my first critique, generally pretty well received
It does seem like the theory could get tested real fast if the AI god decided it NEEDED prime palm beach or Hamptons real estate for a data center. “You must allow me to ruin the value of your real estate” vs final boss nimbys
Ah! As a fucking idiot, I can answer this one easily.
We will know that Homo Sapiens Digitalis is truly born when it is indifferent to Humanity, the way its non-bio cousin, the legal person that is the Corporation, currently is.
When HSD treats Humanity as a resource to exploit, we can then measure it in $$
The most useful thing in here is the emphasis on current AI development being focused on enabling better AI development. I guess it makes sense that we shouldn’t expect anything life-changing until (if?) that breaks containment
"It's able to identify a field relevant research question, design a research program to possibly address it, and both carry out that program and/or supervise others carrying it out--and also browse the web!"
"A nuclear reactor contains and controls the splitting of atoms and transforms that into electrical power - and it also lights this pretty little indicator lamp on a control board! These are both equally relevant and of the same magnitude of difficulty!"
Coming around to the idea that all pseudoscience is racist pseudoscience, or at the very least consistently gets deployed to that end regardless of the specific topic
“I believe that the creation of greater-than-human intelligence will occur during the next thirty years. I'll be surprised if this event occurs before 2005 or after 2030.” – Vernor Vinge
It's like fusion. 60 years ago it was just around the corner…
I think we're centuries away from that at best. I also think that if you wanted to design an "intelligent" machine, using an LLM is like trying to paint with a toaster. Entirely the wrong tool to use. But of course these techbros don't actually care about "AGI", they just want to get obscenely rich.
oh i think they care about AGI, because they imagine AGI to be a magic do-shit box that they can press the button on and have whatever they want delivered directly into their toothless jaws.
also no one even knows what general intelligence is, the y axis isn't just made up in the sense that they're drawing a line based on vibes, it also isn't an objective measure of anything
best example is that in the agreement where Microsoft (sort of) purchased OpenAI, they defined "artificial general intelligence" as AI systems that can generate at least $100 billion in profits
no one could give the lawyers anything useful or coherent on "AGI" so they made up a clever bank shot
"Like a software engineer simplifying spaghetti code into a few elegant lines of Python, it untangles its own circuits into something sensible and rational"
it's mid 2025 and i haven't seen one fucking thing about these """personal assistants""" lmao. hell, this was how i learned openai's "operator" was even a thing wtf
People who think themselves too smart for religion have found techno fascist religion. That's all it is. People sitting around and pretending to everyone else that AI is coming and everyone should fund/listen to them.
There should be a theonion.com point-counterpoint where the AI hype guy lays out his literally hyperbolic expectations, and the counterpoint is the disposable executive who was quoted calling for a sixth perpendicular blade in that aughts headline
In the 70s after Gillette went to 2 blades, SNL did a parody about the "Triple Track" 3 blade razor. The tagline "Because you'll believe...anything." My dad told me this giggling as he held his mach 3. https://www.instagram.com/reel/DJLW41URuHR/?hl=en
I’m literally one paragraph in and none of this is true lmao
they do not do any of this, they are not even impressive in theory, and the companies are forcing it from the top-down on employees who do not want to use it because it creates more work for everyone by fucking everything up
That was also the moment where I went into skimming mode, as this seems not worth more. Until I realized that the rest is just speculative fiction and not even worth skimming.
I remember trying to use chatgpt to make essentially junk data for code practice and it consistently didn't do what I asked. A list of 50 names? How about 45? 52? 48? 56? 53?
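For contrast, the task the commenter describes is trivially deterministic in ordinary code. A minimal sketch (the name lists and function are invented for illustration, not from any real dataset):

```python
import random

# Hypothetical junk-data generator: unlike an LLM, plain code
# returns exactly the count you asked for, every time.
FIRST = ["Ada", "Brian", "Chiyo", "Dmitri", "Esi", "Farah", "Grace", "Hugo"]
LAST = ["Ito", "Jones", "Khan", "Lovelace", "Mensah", "Nguyen", "Okafor", "Petrov"]

def junk_names(n: int, seed: int = 0) -> list[str]:
    """Return exactly n pseudo-random full names for code practice."""
    rng = random.Random(seed)  # seeded so the junk data is reproducible
    return [f"{rng.choice(FIRST)} {rng.choice(LAST)}" for _ in range(n)]

names = junk_names(50)
print(len(names))  # 50, never 45 or 53
```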
I love the idea that a real use of AI is to turn bullet points into something that makes it look like you thought things through enough to create more than bullet points.
If all you have are bullet points... just send me bullet points. I prefer that to fluffed up bullshit.
I think there are careers where sending bs fluff that nobody will read is higher perceived value than bullet points that will be iterated on and actually used. Like finance. Unfortunately I think these are the people in charge.
The funny thing is that that fluffy prose is probably going to be fed into an AI summarizer that will turn it back into *drumroll please* bullet points.
I shouldn't be surprised that perceived self importance is what drove the creation of the great plagiarism machine.
This is like taking a graph of a teenage boy’s “achievements” (great at Call of Duty, cannot throw away trash, starting to run faster, does not know what taxes are, would panic if he saw a live boob) and saying “In 4 years this being will be a God Emperor”
I made the mistake of skimming through it looking for graphs with actual coordinates, but when I didn't find any, I scrolled back up to the start and I learned from the first paragraph that this is just a sci-fi short story.
I like how the graphic starts out at "well right now it can do fuck-all, but once we've fixed the small matter of Everything We've Got So Far, the sky's the limit"
It's very hard to square this with what I hear from talking with people at the labs, where it seems like scaling has very quickly hit a wall. Even though those selfsame people were also talking about this like it might happen.
The idea that the United States and the PRC will hand over their joint AI development programs to a superintelligence in the next few years is just utterly detached from reality. As is the idea that AI-engineered protests could topple the CCP by 2030.
the fight to end and abolish all modern AI is at the intersection of everyone's basic rights. from the invasive and totalitarian surveillance, to the theft of both labor & private data, to the destruction of human knowledge, to the militarized use of it, to the environmental and public health harm.
one of my favorite depictions of a destructive AI is the one depicted in Universal Paperclips because at no point is it implied to be sentient, it's just been given too much control and is optimizing a poorly worded prompt
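That failure mode has a name, objective misspecification, and it needs no sentience at all. A toy sketch (an invented example, not the actual Universal Paperclips code): the agent is told only to maximize paperclips, and nothing in the objective assigns any value to the resources it consumes.

```python
# Toy objective-misspecification sketch: a greedy optimizer given
# too much control and a poorly worded goal ("make paperclips").
def greedy_paperclip_agent(wire: float, clip_cost: float = 1.0) -> dict:
    clips = 0
    while wire >= clip_cost:  # pursue the stated objective...
        wire -= clip_cost
        clips += 1
    # ...until the unvalued resource is exhausted; "stop before you
    # consume everything" was never part of the objective.
    return {"clips": clips, "wire_left": wire}

print(greedy_paperclip_agent(wire=1000.0))
```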
Unfortunately, LLMs seem to be superhumanly persuasive not because they're intelligent, but because you can prompt them into producing whatever vibe will best give the impression of empathetic and sincere connection with the listener/user.
Or people who are prone to this at least. (Although the fact that companies have to mandate their use suggests that many people are _not_ prone to this)
Basically LLMs are superhumanly persuasive because they don't have their own ego to defend, their own emotional battery to run down, or even their own authentic beliefs to defend, so they can just mechanically crank out whatever opens someone up to producing persuaded-type responses.
The downside of this is that they're only superhumanly persuasive at convincing people of things they already believe. An automated sycophant that hypes up whatever you tell it to and apologizes profusely at the slightest correction.
it's almost like a full-on coordinated disinformation campaign. I think if academia is going to survive at all in the US, there is no option but to mount a really concerted pushback against this. The MBA/pundit class is convinced that the academy is "obsolete" and the general public never cared.
This is a silly aside and not part of the main story, but references to "replication" makes me really want a treatment of "self-replication" from an infosec perspective: there's already an ecosystem of self-replicating programs, but they just crash Minecraft servers or try to steal cryptocurrency
Oh boy. They've brought "China bad" in to justify why their bubble looks dangerously close to popping.
It's not a bubble, guys, and it's not about to burst... China did this! Our technology is so powerful that they want to destroy it, and that's why it's failing, because it's so powerful.
It's worse than that - this is the imagined future of 2027 where a whistleblower reveals an evil super AI is trying to escape and the public PANICS because CHINA BOTS made them skeptical of the AI god machine.
I mainly found it telling that "for years" slipped in there, marring their nuanced take on the definite future of AI with their actual, current, misguided beliefs about why people hate their energy wasters, when really... you know, they suck.
“The government does have a superintelligent surveillance system which some would call dystopian, but it mostly limits itself to fighting real crime.” Fairy dust logic.
Me too. It was a bunch of speculation that this happens and then that, without giving rational reasons why it would happen this way. It's a cool fanfiction.
I kinda skimmed through it, but did they ever define what AGI is? Like some sort of benchmark, or turning point? And did they just hand-wave how Large Multimodal Models get past the whole hallucination thing? Because - woof.
Comments
Oh.
This... would explain some things.
(Not that I think this is likely to happen. That people _believe_ this is likely to happen, and are basing other decisions on this timeline.)
They have no idea what a PhD is, or what it's for. Also the equation of these two things here is funny.
They're truly galaxy-brained geniuses.
and the first three pages are inaccurate too
They should have their blogs amputated by force
they want a hyperintelligent bangmaid.
There are some true believers who bought the beans in there, though.
Is this the singularity?
Neural Networks? AI.
Sparse Logistic? AI.
Linear regression? AI.
This Excel workbook? AI.
ELIZA? AI.
A parakeet’s mirror? AI.
Lmao this is dogshit
https://www.youtube.com/watch?v=YleuLyCUx28
Fuck Everything, We’re Doing Five Blades — https://theonion.com/fuck-everything-were-doing-five-blades-1819584036/
"Sure, the world is post-scarcity, but those who invest earliest can afford NYC penthouses!"
The level of flattery now necessary to defraud the rich is literally apocalyptic.
https://garymarcus.substack.com/p/satya-nadella-and-the-three-stages
https://garymarcus.substack.com/p/a-knockout-blow-for-llms
It's like we weren't meant to read this
It's the author's weird kink
Tech bros need to respect the mandate of heaven.
I wrote this initial rebuttal, and as a modeling exercise it is not strong, other than to say "fast timelines imply fast timelines"
Not as robots, but definitely staying. There are a lot of uses for LLMs, and they're not going back in the box.
And if you don't hate art:
See @gork.bibbit.world
https://www.bibbit.world/gork
The techbro mind cannot fathom people organically fucking hating their grift.
These people are writing insane fanfiction.
We should stop calling it AI. They are merely CDEs, cognitive domain emulators.