The only viable use cases for AI - even at today's massively subsidized price points - are scaling up scams, grifts, cheating, plagiarism, and carelessness.
Which makes sense - 80% of everything is shit, which includes training data. AI is trained on slop, so it makes (lower resolution) slop.
Reposted from
Baldur Bjarnason
And, yeah, all of these issues are inherent in how we mismanage software development already. What Large Language Models are doing is magnifying the flaws in our already inadequate approach to making software.
When you have a stake in the quality of the output, compromising on quality results in a commensurate reduction of value.
But if the output is for *someone else* then only ethics prevent you from making it as shit as possible.
Five years ago you'd drive the cost per hit down by outsourcing to a content farm. Today this is done by an LLM.
But this is not a WRITING use case. It's a grift use case.
Five years ago you'd copy paste someone else's essay and change a few words into synonyms. Today you're getting LLMs to do essentially the same thing.
But it's not a writing use case. It's a plagiarism use case.
Five years ago you'd just lay off a bunch of people and talk about "the year of efficiency." Today you do the same but call it "AI transformation."
The AI isn't doing the work; it's providing the excuse.
The siren whisper of "oh, you're looking for 'good enough'? Come right this way."
It's a ripoff of a ripoff.