daniel-eth.bsky.social
AI alignment & memes | "known for his humorous and insightful tweets" - Bing/GPT-4 | prev: @FHIOxford
99 posts 3,957 followers 36 following
comment in response to post
But ultimately, I do think you need to do both to convince people. (At least for now - once it’s a high salience topic that, say, 15% of people can talk about ~reasonably, then I’d expect problems on both ends to go down a lot. But we need to get to that point first)
comment in response to post
I get why it’s hard to do both. Avoiding the first pitfall requires not flinching away from the awkwardness of arguing something that’s “crazy”. Avoiding the second requires having the intellectual empathy to understand where the audience is now & translating the case into their language…
comment in response to post
Lots of people *do* talk about the main thing, but often front-loading parts that are hard to swallow, without first giving background context to close the inferential gap. Sometimes they use language that’s likely to cause a knee-jerk reaction against whatever’s said…
comment in response to post
they convince their audience of this weaker point. But what good does that do? Perhaps a little, but it’s so weak and removed from the real concern that it probably doesn’t do much. Not clear it even helps much with then convincing them of the main point. Meanwhile…
comment in response to post
I think the AI safety community is largely failing this test (with most people failing one but not both bullets). Many people fail the first by arguing for some weaker point, like “AI is becoming increasingly powerful, and powerful things can be dangerous”. And maybe…
comment in response to post
lol took a moment
comment in response to post
People often think of it as “voters (or American voters specifically) tend to grow bored and/or have too high expectations in general”, and I think there’s *some* of that, but it’s not the whole story
comment in response to post
darioamodei.com/on-deepseek-...
comment in response to post
This honestly isn’t that different from being an upper-middle class kid during summer vacation. “No, you can’t just sit at home all summer watching TV - you have to go do something. Now, your choices are baseball camp, art camp, math camp, lake camp, or any of these other camps”
comment in response to post
So, conditional on AGI not killing everyone and things going “reasonably well”, I suspect there will be a real economy of AI systems, plus a fake economy of human labor for economic redistribution, but where the fake economy may be geared toward activities humans enjoy doing anyway
comment in response to post
Note that make-work doesn’t *have* to look like boring office work. It could instead be things like superfluous childcare, competitive sports, art, etc.
comment in response to post
If you work in AI or follow the industry closely, your reaction is probably “yeah, this ain’t news”. But for those more on the periphery, the narrative “AI is slowing down” has spread a surprising amount. time.com/7205359/why-...
comment in response to post
Didn’t realize he cross posted it, saw it on Twitter
comment in response to post
Doesn’t seem like we need systems that don’t make mistakes in order to get AGI, as humans also make mistakes
comment in response to post
Yeah
comment in response to post
More than others, I’ve expected the public to side against the big labs, and AI in general. But even I would not have expected HN commenters to sing this tune. Was there a schism somewhere that I’m not familiar with?
comment in response to post
H/t to @agucova.bsky.social for the clip. I continue to predict we will see more and more people in the general public waking up, and further that when they do, their reaction will generally be “wait wtf, that is not okay”
comment in response to post
Software engineering being easier to automate than most other jobs means this could happen before most other workers wake up