It's called "philosophy." Humans have been doing a fine job with it for thousands of years, and unless OpenAI is gonna start hiring a lot more philosophers, religious leaders, and community leaders, no thanks.
Interesting, I just did a post and then I saw this. It feels as though what I posted supplements your post. Nonetheless, correct: great article, TC. ClosedAI is one of those companies I don't fully trust; there are worse, but they are by far the most widely recognized.
Indeed. I'd point to this excellent @techcrunch.com investigation on Nokia supplying SORM resources to Putin: not a single comment about decency, ethics, or morality came from those Nokia executives: https://rb.gy/6vz6hq
This is outrageous. A year ago I heard a highly ranked executive of an international corporation claim that "prompt engineering" would replace innovation and that GPTs were going to produce intellectual property that drives profits up. This was a person making decisions impacting millions of people.
Is there even an accepted and coherent interpretation of morality? What would be a “morality” benchmark? Should we use something more specific and less general than the concept of morality?
It’s a philosophical question, and it always has been. Techno solutions to philosophical questions have been shown to fall flat on their face. It’s pseudoscience at best, and most likely hype manipulation.
Ah, yes, I'm sure that the now ClosedAI developing AI that will act as a 'Moral GPS' for humans is a perfectly good idea.
This sounds dystopian.
Tune in next week when we try to solve metaphysics with an air fryer and in 2025 when we expect to solve epistemology with an iPad