As someone who hates SCRUM but is stuck living in it, I'm horrified.
When you combine "Let's make business break things down into very small 'user stories'" and "Let's make the devs use our LLM" you get crap work and an obvious attempt to carve the devs out.
Functionally illiterate people write me wonderful e-mails at work and then stand in front of me like traffic cones, unable to articulate basic sentences and ideas.
I respect your thoughts on this, but I don't think you can even talk about "AI" as a single thing. There's a wide range of mathematical techniques, data-gathering practices, and use cases all wrapped up in this vague term "AI". I think we need to start being more specific when we talk about it.
I mostly agree, but I'm curious - what use do you see in art? I presume you don't mean fully generated pictures/music/videos, but something like a filter, automated background removal, etc?
The best use case I’ve found for it is “I need shitloads of dummy text for testing very quickly” and that happens almost never and I have other means of getting that.
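For what it's worth, that use case doesn't need an LLM at all; a few lines of standard-library Python cover it. A rough sketch, where the word list and sizes are arbitrary placeholders:

```python
import random

# Entirely made-up filler vocabulary; swap in any word list you like.
WORDS = "lorem ipsum dolor sit amet consectetur adipiscing elit".split()

def dummy_text(n_words=200, seed=None):
    """Generate n_words of throwaway filler text, reproducibly if seeded."""
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

print(dummy_text(50, seed=42))  # identical output every run, handy for tests
```

Seeding it makes the output deterministic, which is exactly what you want in a test fixture and exactly what an LLM can't guarantee.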
Machine learning is great, but it also requires people to 'train' software on a very specific task. There are so many cool use-cases. However, LLMs and the art generators are literally just ML brute-force plagiarizers that DON'T have a specific use! And none of it is sci-fi artificial intelligence.
I would add that it's being applied to things it has absolutely no capability for, just because it produces reasonably human-sounding blocks of text claiming that it can.
I've seen people ask it to do an analysis of information in a prompt. It will spit out numbers and paragraphs as if it did.
But it hasn't done anything but produce text and numbers that look like an analysis. And through an accident of language and training data, sometimes it happens to seem ok.
Yeah I'm more or less in this boat. The societal impacts in all sorts of areas seem to be something we are not at all equipped to deal with -- from disinformation, to harassment, to the ability to collect information super efficiently, to the rapid undermining of formal education.
Throw in the obvious environmental impacts, especially when we're actively trying to solve those against a clear and present danger, and the argument "let's let AI fix this, AI will tell us how, just trust it" is shown up for the hubris of a potentially dead civilisation.
Tim's & Katie's takes align with mine. There's so much hype around this stuff and seemingly so little attention being paid to the ethics of it. So far it seems to be landing as just another tool to help capitalists grow wealthier by not paying people, unfortunately & not surprisingly.
And, I say this as someone who uses it quite a bit in my work. LLMs can be genuinely useful tools. We need to figure out how to fairly compensate the humans who created the original knowledge, and we need to not rush headlong into slashing workforces in favor of unproven tools.
Further, I'd suggest it effectively doubles down on our already inadequate handling of these same impacts from hyperconnectedness, media-rich content, and self-reinforcing addictive algorithms.
This is the same as the internet, though. We still haven’t figured out how to inure society to every village idiot talking to each other and forming a political movement. That doesn’t make it not useful for other things, but those things are not the same as the original sales pitch.
I tried looking up an historical quote the other day and, rather than returning the quote, it fabricated an entire vintage movie that used the quote in an unrelated bupkis context. It not only didn’t provide the good info, it created and delivered bad info. No optimization is worth losing the truth.
I'm very mixed on this, because we saw the internet that way too, and despite how poorly some things were handled, I feel like we've mostly gotten a grip on it in recent years.
But AI could also be a different beast entirely. It's a difficult conundrum.
I really wish people would consider that outside of these absolute worst use cases, the underlying technology is actually an accessibility tool that has given mute and nonverbal people the ability to communicate.
Like, five years ago, if you were just a nobody who talked a lot about politics on social media, realistically it was never going to be an issue for you when crossing a border. Now? It's very, very easy to police & target absolutely anyone's speech, at scale. LLMs make that possible.
More critically, it’s not just that LLMs make it possible to dig through all the text, it’s that they also get their answers wrong depending on how you ask. They can be made to give the answer the user wishes with the right tweaks to the question. And they’re 100% confident either way.
I’d be very interested in documenting government use of LLMs for immigration processing. Can you point to any sources? As far as I have seen, they are using simple word or phrase searches to bar/deport people. Also apparently how they are terminating federal grants—keyword searches, not LLMs (yet).
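To make the distinction being drawn here concrete: the keyword screening described is about this crude. A toy Python sketch, with a term list invented purely for illustration:

```python
# Illustrative only: simple keyword flagging, as opposed to anything LLM-based.
# The term list is entirely made up for this sketch.
FLAGGED_TERMS = {"equity", "climate", "diversity"}

def flag_text(text):
    """Return whichever flagged terms appear in the text (case-insensitive)."""
    words = set(text.lower().replace(",", " ").split())
    return FLAGGED_TERMS & words

print(flag_text("Grant proposal on climate resilience"))  # -> {'climate'}
```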
How do LLMs make that possible, any more than just "statistical models" and "personalization" do? What you seem to be describing here is "surveillance capitalism", which is totally a thing and has been since, well, Google, then Facebook and the rest of social media. It's unrelated to "AI" and the latest LLM wave.
Fuck AI.
Even the companies building/pushing "AI" seem to be glossing right over some genuinely useful applications to sell it as a panacea.
2) The writing is not as good as a competent writer's. It's okay, not good.
Between the two, not very useful.
Given the environmental impact and ethical issues, not worth it.
People misusing the tools is bad.
But what I really hate is the people who deliberately, for profit and personal gain, mislead people about what the tool can do and how it can be used.
I cited that in a blog yesterday, arguing that our uniqueness and weirdness is the *good* stuff.
https://twobitrye.com/2025/05/06/embracing-weird-part-1-fear/