LLMs are nothing more than models of the distribution of word forms in their training data, with weights modified by post-training to produce somewhat different distributions. Unless your use case requires a model of a distribution of word forms in text, then indeed, they suck and aren't useful.
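To make that claim concrete, here is a minimal sketch of what "a model of the distribution of word forms" means, using a toy bigram model over a made-up corpus rather than a real transformer. The corpus, the function names, and the crude stand-in for post-training are all illustrative assumptions; a production LLM estimates the same kind of object at vastly larger scale and with far richer conditioning.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" (an illustrative assumption, not a real corpus).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Pretraining": estimate P(next word | current word) from raw counts.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(prev):
    """Return the model's distribution over word forms following `prev`."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample(prev):
    """Generate by sampling from the modeled distribution."""
    dist = next_word_distribution(prev)
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(next_word_distribution("the"))
# {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}

# Crude stand-in for post-training: nudge the counts toward preferred
# continuations. It's still the same kind of object, just a somewhat
# different distribution.
bigrams["the"]["cat"] += 2
print(next_word_distribution("the"))
# 'cat' is now the most probable continuation of 'the'
```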
Reposted from Hank Green
There are a lot of critiques of LLMs that I agree with, but "they suck and aren't useful" doesn't really hold water.

I understand people not using them because of social, economic, and environmental concerns. And I also understand people using them because they can be very useful.

Thoughts?
