a) Machines have been superhuman at many intellectual tasks for decades. But as long as they don’t have legal personhood, this gets framed and expressed as “tool use.”
b) I don’t think legal personhood is coming soon.
c) “AGI” is going to be a perpetually receding horizon, for social reasons.
Comments
Not so sure about France...
But idk. I have illuminating, wry conversations with chatbots. I'm learning from them and laughing at their jokes.
It makes zero dent in legal personhood.
But I could be wrong. Let us see.
I don't even like the voices, nor voice input; I think what matters is the ideas in the words
And yet, wow, what a change in intuitions anyway
Job displacement is relevant, a worry, and easy to understand.
Previous tests haven't actually followed the plan Turing laid out.
1) e.g. tying AGI to "human-level intelligence", though what does that even mean?
2) e.g. narrowing AGI down to specific reasoning tasks
But part of the reason the goalposts are easy to move is that everyone's attached to that modus ponens script where we start by assuming that there must be some criterion defining our human specialness.
You have to really work to get a modern commercial LLM to yell at you
It's refreshing. It will make fun of you.
I have room in my heart for the skeptics of AI improvement (I even have room in my brain for them, they might be right I suppose)
Even systems that already exist display many features that would seem to be screaming out, "do not call it a tool!"
But I also mean the way models display preferences, aversions & so on through their speech (&, in 4o, images)
But I think that people often confuse the training objective ("predict the next token") with the internal architecture that achieves that objective (which is very unclear)
You can read what Anthropic says:
https://www.anthropic.com/research/tracing-thoughts-language-model
Here on bsky it is easy to find lefties who talk a big talk & then are more chauvinist about speaking machines than the median Twitter fascist
It can't vote, but it has every other right at this point.
* Assume no employees, but only 1-2 humans required on the board of directors initially.
We have all the tech, so with some effort someone will do that
and that's really the wrong model of the situation. Some differences are definitional or legal.