LLMs don't have morality; they are not moral actors. They know two things: a) reformat search results to look like human speech and b) don't use bad words. Asking them to solve trolley problems is like asking the Chuck E. Cheese band to play Freebird.
Comments
Heck, this became clear with the very concept of tokenized embeddings: you can do mathematical operations on embeddings, and they obey conceptual logic.
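The "operations on embeddings" point is the classic word2vec-style analogy (king − man + woman ≈ queen). A minimal sketch with hand-made toy vectors, purely illustrative — real learned embeddings have hundreds of dimensions and aren't this clean:

```python
import numpy as np

# Hypothetical 4-d toy embeddings; dimensions loosely stand for
# [royalty, maleness, femaleness, person-ness]. Not real learned vectors.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 1.0]),
    "queen": np.array([0.9, 0.1, 0.8, 1.0]),
    "man":   np.array([0.0, 0.8, 0.1, 1.0]),
    "woman": np.array([0.0, 0.1, 0.8, 1.0]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" - "man" + "woman" should land nearest "queen".
target = emb["king"] - emb["man"] + emb["woman"]
nearest = max(emb, key=lambda w: cosine(emb[w], target))
print(nearest)  # queen
```

That arithmetic working at all is geometry over word co-occurrence statistics, not understanding — which is rather the thread's point.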
Good at making coherent sentences
But coherent sentences don't have to be true
A moral text is not a moral actor. It's not conscious of what it contains or of anything else. There is no entity there, or in a LLM.
Did you mean that the internet is mostly white, for the foundation-model scrapes?
Programmers don't write training data.
They can output text that has the style of moral discussion, in sort of the same way Republican right-wing shills can, but it's all Mad Libs to them.
It’s good at that, but there’s no moral judgment being made, nor is it remotely capable of one.
If I leave here tomorrow
Would you still remember me?
If you start to squint…🤷‍♂️