I just heard someone say “these people saying we’re on the cusp of superintelligent AI are nuts, it’s at least five years away.”
Uhhhh…it better be…
Comments
We don't even understand intelligence yet. Not to mention that the chatbots that exist right now have essentially run out of data to feed in to progress their abilities further.
AGI will likely be an entirely new model of AI rather than the simplistic systems we have now.
In 2030, they'll be saying it's 5 years away. And they'll say the same in 2040.
When the superintelligent AI actually takes over, the first thing it will do is get all reputable sources to say that superintelligent AI is 5 years away.
And tbh, good luck guessing when that’ll happen
but even as someone who loves learning about it and follows it, even I know it's going to be quiiiite awhile 😂
Just saying because it's not like I'm sitting here thinking fusion is the be all end all of human creation 😂
Not actually worth commenting on for the most part, though, especially not for current energy arguments. I also just love the tech around the process.
(I wish that wasn't a real company. That's just asking for some fate that we don't make)
I think we give ability to do quick calculations more credit than we should.
Think outside the box.
What if your brain interfaces with it subconsciously instead of via senses?
or
An accelerator card for your PC with a living hamster brain in it? Maybe that's all that's needed to make it actually intelligent? Who knows?
A person with a prosthetic leg does not have an intelligent leg now.
We’re meant to believe it’s a high-capability AI, but how much of the output we’re being shown is being filtered to drive hype and investment?
emergent human-level AI is 10 years away, and always will be
encryption-smashing quantum computing is 5 years away, and always will be
It seems more likely that we will get tired of the slop and begin the Butlerian Jihad in the next 5 years than that any level of intelligence will be created.
Terence Tao on the questions: "These are extremely challenging. I think they will resist AIs for several years at least."
You'd know instantly it wasn't a person. What if the true test of AI is "believably stupid"?
Then again, the last few years have taught me not all humans are sentient or self-aware either, so maybe it's OK.
Unfortunately, they’re being programmed by us for profit, not to work as close to perfection as possible without bias
don't get me wrong, it's cool tech
but it being viable in the near future is unlikely
"AI, tell me about global warming in ASMR as I fall asleep in my bunker, please."
If we define it as AI that is better than humans at a specific task, then we're already there. Humans cannot beat top computers in chess.
Yes, such a machine would be useful, but the bubble is barking up the wrong tree.
LLMs correlate words with words. Great at that. But it's brute force, not intelligence. There's been scant evidence from any of the commercial offerings (the bubble) that they can correlate words with concepts, never mind concepts with concepts.
Generative AI is probably more like 3d printing was ca. 2013
It's not used anywhere. It was all hype.
Then they calmed down again.
They have one LLM and they all lose it.
It'll calm down again.
Eventually the singularity will happen.
But not yet.
AI is just on the hype train
AI as we know isn't intelligent. Just predictive
Now the hurdle is capitalism.
Not a lot of money in cheap energy
It's just capitalism. As you say, not a lot of money to be made.
Now, quantum computing might open that doorway. Using probability fields with multiple possible states could have the needed complexity.
See: https://bsky.app/profile/njohnston.ca/post/3ldpf5zvww22y AI can't write a math proof but pretends it can.
I just spent a week fixing bugs and rewriting tests for a repo where my coworker used copilot to generate the code and tests and the code would "run" and "pass tests" but had major bugs.
Every day, every hour, from here until SKYNET nukes us, AI is getting better.
https://en.m.wikipedia.org/wiki/The_Singularity_Is_Near
🦆 🦆 🦆 🦆
If your definition of AI relies on a different definition for intelligence, it's still not real AI.
https://en.m.wikipedia.org/wiki/Artificial_intelligence
There is literally no logic put into it beyond probability-based predictive autocomplete, or the most likely diffusion of light and dark pixels across the screen.
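To make the "predictive autocomplete" point concrete, here's a minimal toy sketch of next-word prediction using bigram counts. This is illustrative only: real LLMs use neural networks over subword tokens, and the corpus here is made up, but the training objective is the same idea of predicting the next token from what came before.

```python
import random
from collections import Counter, defaultdict

# Toy "predictive autocomplete": count which word follows which,
# then sample the next word in proportion to how often it was seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, steps=3, rng=random.Random(0)):
    """Extend `word` by up to `steps` words, sampling by bigram frequency."""
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:  # dead end: word never seen with a successor
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))
```

No rule of grammar or meaning is ever consulted; the continuation is purely a statistical echo of the training text, which is exactly the objection being made above.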
people who still create and deal in tangible goods will be ok. The rest of us should be nervous.
at best we're getting digital assistants with translators.
at worst we're gonna have a dead internet.
fire is more alive than LLMs
That said, having done some training work on subject matter AI, yeah, it's a bit concerning.
Or maybe we won’t realise for a while, and later find out that some things we thought were real were AI - and we’d slipped past the event horizon without realising it.
https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/