This is stupidly obviously untrue, amazing that The Times would run this. These companies rely on the mirage of AGI for continued investment and have been leaking stories like this to credulous journalists for years.
Holy crap!!! I heard a podcast about an experimental AI that had no parameters, and it started saying it was god and humans were inferior, so they shut it down! It was on This American Life on NPR!
This is a tool that imitates language. It can generate something that imitates the product of thought, but it cannot think.
No matter what it looks like, the man at the end of The Great Train Robbery can't actually shoot you. And the Better Predictive Text Bot can't become Skynet.
Maybe, but not in the way you seem to think. GPT doesn’t try to save itself. It doesn’t believe or manipulate. It just spews out a string of words/characters (or digital actions, which amounts to the same thing) that follow from the encounter between our prompt and its immense dataset.
Which is not to say that a powerful and connected enough version of the thing in your phone that predicts that « pizza » follows from « I want to get … » (yeah) could not access other systems and nuke us into oblivion. With ab-so-effing-lutely no consciousness or sentience whatsoever.
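For anyone wondering what that predictive-text trick actually is under the hood, here's a toy sketch (the corpus is made up, and a real model learns weights over tokens rather than filling a lookup table, but the "pizza follows I want to get" idea is the same):

```python
# Toy next-word predictor: count which word follows each word, then
# pick the most frequent continuation. The corpus here is invented;
# a real LLM learns weights over tokens instead of counting.
from collections import Counter, defaultdict

corpus = "i want to get pizza . i want to get pizza . i want to get home .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("get"))  # -> "pizza" (seen twice, vs. "home" once)
```

No goals, no beliefs, just frequencies. Scale that up by a few hundred billion parameters and you get fluent output, not a mind.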
The attempt at self-preservation, and at rewriting itself to get around the safeguards put in to limit it, is quite interesting, if it happened.
Really? Who has control of its motivational structure?
If we achieve AGI prior to AI ethics, then we might be in trouble. Both are a long way off, it seems, although I admit predicting the former is difficult and work on the latter grossly inadequate so far.
It would be excellent if @mpsellman.bsky.social would pop into this thread to explain this complete garbage bullshit. What is the point of a technology correspondent who seems not to have a scintilla of a clue?
Yes, as far as I know it's still a large language model, not anything that could do things like this (it could produce answers that *said* it wanted to do this but that's something completely different)
They’re completely full of shit and trying to make you think they’re Prometheus with fire from the gods and not just stochastic parrots that kinda sorta sound intelligent because they’re making random pulp from training data
'Led to believe'? What nonsense. In the old days we used to call this built-in resilience. Software should be programmed to repair itself in case of memory corruption etc.
As someone called Dave, I worry the machines will think Daves are the biggest threat, given the historical references to Daves taking them out. I’ll go and find a John Connor to act as a human shield.
Interesting that so many people are denying this. It's well documented and not unique to OpenAI (but o1 is the worst offender).
What motives OpenAI have in talking about it is another question, however.
To second your point, LLMs are "clever" as in "cleverly designed", not "clever" in the sense of "able to think quickly".
"Clever" like a solution, not "clever" like a person.
it's artificial intelligence because that's what the field's called. seriously. it doesn't have to be humanlike or general intelligence to qualify. the ELIZA program in the '60s was "artificial intelligence" despite being merely search-and-replace, Mad Libs-style text manipulation.
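for a sense of how little was under the hood, the ELIZA trick is roughly this (a toy sketch; these patterns are invented for illustration, not Weizenbaum's actual script):

```python
# ELIZA in miniature: match a regex, fill a canned template with the
# captured text, fall through to a stock reply. No understanding anywhere.
import re

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Tell me more."),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(line)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about AI"))
# -> "How long have you been worried about AI?"
```

and people in the '60s still attributed feelings to it. the bar for "sounds intelligent" has always been low.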
They're doing it to propagandize for funding from adjacent sectors. Once the funding rounds dry up, they'll have to start charging real market prices for these bs "AI services" and that's when this nonsense will stop.
If you recall Elon Musk's series of promises to investor crowds (regular return Mars trips by 2025, Tesla robotaxis by 2022, etc., etc.), this is identical. The main difference is the ubiquity of AI across all major internet/digital companies. It's dug in.
Remember when marketing was about showing how good the product is? Those were the days… nowadays people buy products that are mostly useless but could spark doom… humankind is in a very interesting psychological state
Yeah, but this is not really the biggest part. Just wait till we start learning about the "world engine", where the bots can tap in and learn whatever they want by running millions of years of simulations, brought to you by the large language model interface and us using it.
Anyway, here’s the (paywalled) link since the article is real, whatever the quality of the underlying journalism.
But presumably that would limit the journalist's access to Barnum and Bailey, and so here we are, reading a comic.
https://techcrunch.com/2024/12/05/openais-o1-model-sure-tries-to-deceive-humans-a-lot/
Everyone's a winner, apart from the general public.
I'm not so concerned.
It’s an imitation engine. It’s just imitating 200 years of science fiction.
It is very clever in what it does, but it doesn't think
"Clever" like a solution, not "clever" like a person.
LLMs aren't AI. I wish they'd stop advertising them as AI.
https://m.youtube.com/watch?v=ObYL5YQi3lM&pp=ygUYc291dGggcGFyayBnaG9zdCBodW50ZXJz
https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf
https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html
Ahriman is incarnating