“That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise. We should in vain, therefore, attempt to demonstrate its falsehood”
IMHO terrible: it's laziness to substitute letting a machine find "patterns" & sticking some post hoc rationalisation onto them for working out what observations a given theory leads you to expect & amending the theory in response.
It's easy to get ChatGPT to contradict itself, that's for sure. And it can only be trusted, somewhat, on dry technical concepts. On anything controversial, it can be counted on to present the establishment narrative even when it's contradictory, probably mostly b/c that's what its search engine runs into.
A does B. C does B. Therefore A = C, is that the implication? Isn't that a bit of a non sequitur? Or are you implying that finding patterns and associations is inherently bad, so the mere engagement in that process nullifies all validity and utility of the output?
That's fair. But unlike with astrology, I don't find that people claim the output of LLMs is the end point, nor a scientific endeavour, although that could be my bubble.
For science that's the beginning, not the end. The steps in between are designed to *try* (imperfectly, because the people doing science are humans) to remove bias from the process.
Does astrology do that though? Apparently, if you're a Libra, you have "a unique ability to blend confidence with empathy"—or so the internet tells me.
But isn't that just made up out of whole cloth, rather than being based on the observation that this tends to be true?
Comments
“That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise. We should in vain, therefore, attempt to demonstrate its falsehood”
Joke's on him, ChatGPT says the sun will rise
ChatGPT's result, on the other hand, is basically support for his general theory.
Unless the pharmacist is tall, dark, and handsome.
LLMs seem to wax as poetic as astrologists 🤣
https://youtu.be/dCXgeF0n554?si=xj8w6zFGwVzIlXaT
There are many valid criticisms of LLMs and AI, but this is not one of them.
PS: "LLM model" is a redundant phrase. The 'M' in LLM already stands for "model".