So partly this is a theological argument?
I would very much like to see your analysis of how the design of these AI models is targeted to get users addicted. I have seen a lot of discussions (pro and con) of the designs but have never seen how the design is set up to achieve this.
So maybe these problems are growing pains?
I understand using some of this technology to try to understand dark matter, or to achieve nuclear fusion.
But all of these companies are trying to tell the public that they won’t have to think. They can just trust these models. No risk, no boredom, no disagreement, just pleasure.
As the tech becomes widely, cheaply available it *can* empower people. Should we trust them to use it to do good things?
These companies are trying to make computers like people, but they're making people like machines.
Poems are meant to be felt.
Every general-public use case for AI is "you'll get to have fake black friends." "You'll get to put yourself into movies." "You won't have to read that book."