daniel-eth.bsky.social
AI alignment & memes | "known for his humorous and insightful tweets" - Bing/GPT-4 | prev: @FHIOxford
99 posts 3,957 followers 36 following

If you want to get people to seriously engage with an idea outside their current worldview (eg “AGI could take over”), I think you need to both:
• actually talk about that idea, instead of some adjacent idea
• speak to them in their language, meeting them where they are now

Broke: preparing for AGI by buying TSMC & NVIDIA stock and shorting bonds
Woke: preparing for AGI by getting Pakistani citizenship

Thinking about the homeostatic effect in politics, I guess one reason for it is:
• if you solve the problem you championed, voters stop thinking it’s a problem, so it’s no longer a salient issue
• if you fail to solve it, those who supported you over it grow disillusioned

At some point, some AI company is going to accidentally call their RSP a name that’s already been taken by another AI company. Imagine how embarrassing that’ll be!

Who called it “model weight self-exfiltration” when they could have called it “outside the box thinking”

[guy riding on Theseus’s ship] “I’m getting a lot of Theseus’s ship vibes from this ship”

The people who are not particularly surprised by recent developments in AI are by and large saying AGI could be soon. The people confidently saying AGI won’t be soon are the same people who kept being very surprised by advancements over the past years.

Interesting passage from Dario’s recent piece:

Reminder that during the Space Race, we had lots of engineers working on making sure the rockets would be safe, wouldn’t blow up, & could be steered. If we had instead only focused on “rocket acceleration” while ignoring “rocket safety”, we never would have been first to the moon

Hot take, but if the narrative from NYT et al had not been “lol you don’t need that many chips to train AI systems” but instead “Apparently AI is *not* hitting a wall”, then the AI chip stocks would have risen instead of fallen

Colombia is a major exporter of flowers, so tariffs mean we should expect flower prices to rise. Not financial advice, but now might be a really good time to speculate on some tulips

Splitting a meal with some friends in 2007 Zimbabwe:

People keep suggesting UBI as the solution to AGI-led unemployment. I don’t think society will accept that, and I suspect a job guarantee + make-work is more likely

Type of guy who one-boxes but picks box A

Good piece from Garrison Lovely arguing that, contra MSM claims, AI progress isn’t stalling but instead becoming invisible. Chatbots haven’t changed much, & that’s what the public sees. Meanwhile, AI has gotten much better at things like STEM research, which is a huge deal

Training a flexible, general-purpose reasoner that can succeed despite unexpected obstacles seems pretty hard. Worryingly, training a flexible, general-purpose reasoner that can succeed despite unexpected obstacles *except when those obstacles are humans trying to stop it from succeeding* seems harder.

When my grant applications to Open Phil and the Gates Foundation both get rejected:

Every now & then I come across this view, and my reaction is - why? We’ve developed AI systems that can converse & reason and that can drive vehicles w/o an understanding at the level of fundamental principles, so why should AGI require it? Esp since the whole point of ML is that the system learns in training

Wait this is from *hacker news*??? What’s even going on?

When you have high p(doom) and *very* short timelines:

Random TikToker reacts to learning about the idea of superintelligence:

Reminder that AI will eventually* not only have much better “high IQ + rationality” intelligence than von Neumann, but also better “strokes of irrational intuition” intelligence than Einstein

*and “eventually” might be less than 3 years away (or could be longer)

If you were told that self-driving cars recently started appearing on the streets of SF and you were shown this graph, you would never in a million years guess the reason for the trend

Optimistic Eliezer Yudkowsky:

This is not the president that declared martial law, it’s the acting president that replaced him after that one was impeached

Everything else aside - it was smart of the right to delay their civil war until after the election, instead of doing what the left typically does and having a civil war before the election

👀

Thesis: AGI stands for “artificial general intelligence”
Antithesis: AGI stands for “adjusted gross income”
Synthesis:

Gonna be v interesting if *software engineers* go the route of artists and start becoming anti-AI due to financial & status threat from AI taking their jobs

“but it’s still not AGI” Amazing. www.newscientist.com/article/2462...

Maybe I have long timelines, but I confidently predict AI will not solve any of the following within the next 6 months:
• atomically precise manufacturing
• Dyson sphere creation
• extreme life extension
• whole brain emulation
• advanced femtotechnology
• Alcubierre drive

Real-time video of deep learning hitting a wall