Well, it's been a couple of years now, and as far as I can tell the only use case for genAI has been to accelerate the descent into dystopia by allowing fascists to pretend it's not them making their monstrous decisions.
Comments
I have one! There's a use case in letting people who've lost the ability to speak to customize a speech generator using recordings of their own voice which is neat
But that's, I think, the full scale of good use cases here: small, individual uses where you can truly claim ownership of the inputs.
Its overwhelming use case is increased accessibility and democratisation of knowledge. The common fear that it will replace artists and authors is fearmongering. Though I'm sure it could generate episodes of EastEnders without anyone noticing.
You're a fucking idiot. ChatGPT knows nothing. By design, it cannot know or care whether any of what it says is true. It is already in use to replace the purchase of stock images, and by design it plagiarizes art of all kinds.
“Democratisation of knowledge” is genuinely embarrassing. There’s nothing generally stopping folks from learning how to do a thing or learning about a thing other than a lack of willingness to put in *some* effort. And it can be done without stealing the work of others.
You don't see how translators, voice recognition, text to speech & search engines etc, help people access knowledge?
AI isn't about stealing artists' work. Any art produced by AI will by definition be derivative & lack humanity. If that's your art you're in trouble anyway
Democratisation of bullshit, more like. Like, I'm sorry but with how utterly routinely AI gets simple, easily verifiable facts wrong, it isn't democratising any sort of knowledge.
ATM that's largely because people use the wrong AI for the job. Think of it as replacing Google. It's astonishing how many people can't Google properly. Semantic search solves a lot of that. Add translation / text to speech / speech to text.
I say this with the greatest possible respect, but: you do realise we already had tools for all of the above *before* AI, right? And in the case of Google search at the *very* least, the tool was significantly more reliable before AI. So, at best, this is a solution in search of a problem. Sorry.
Yes, and translation / text to speech etc have pretty much always used a form of AI, though what we consider AI changes as the tech advances. I don't think people really appreciate how much AI we use day to day.
I agree with you on Google as a product, but that's not really what I meant.
But AI is not "replacing Google" when Google is also enshittifying itself with it?
Search engine quality is actively degrading as a result of AI integration. It's not just people doing things incorrectly or choosing the wrong tool for the task. The AI tool itself ruins the UX and garbles semantics.
I mean replacing Google as a tool when people want to find things. Plain English is better than keywords plus modifier symbols, and that's where we're going.
As an aside, you sound like you might prefer https://startpage.com: it uses the Google/Bing backend but strips out AI and personalised/targeted ads and trackers.
Hi! As a disabled person who needs accessibility, let me disabuse you of the mistaken belief that AI is doing anything other than shortchanging us. The quality is atrocious, and it's not acceptable, just subpar bc y'all don't wanna do basic comms, like providing ALT text and accurate transcription.
Also, it's not "just fearmongering" when jobs are cut and slop books are taking spots in libraries instead of crafted books that were written and edited by living creative humans. In fact, some slop books are so wildly inaccurate that their content could literally kill someone. It's not great!
Not only that, but the process of writing itself is information synthesis, it's the act of thinking, not simply the output of the act. When we ask a piece of tech to do that for us, we're losing the ability to think for ourselves.
I don't disagree with that, but it doesn't mean it's not useful. Not all writing is art. And the generative part is frequently making something more understandable. An instruction manual, for instance.
I'm a professional instructional designer, and I disagree that the work I put into making accessible and digestible training can be easily replicated by AI.
I was sceptical from the start. I said this technology is either useless, an offence against human creativity, or actively malicious. I was told to give it time, that it had its place, that it was just a tool and people would develop ways to use it ethically.
I'm pleased to report, however, that my initial assessment was 100% correct and the technology has made the world objectively worse for literally everyone. It should never have existed, but since it does, no one should use it. If you do, you're siding with the enemy.
If you don't train it on a database of stolen work, you sidestep one of the big ethical issues, but I still think if you use that to shortcut the creative process, you're cheating yourself.
Here come the AI defenders. I'm an artist and an author, so let me tell you this from firsthand experience: it is harming our livelihoods. In a world that already undervalues human creativity, a machine that automates us is an existential threat. You are choosing to eat slop.
You are choosing to actively hurt the people who make the things you love, who design the environments in which you live, who in some way contributed to the appearance and function of every made object you have ever held, because it involves less immediate costs to you as an end user.
And the output is demonstrably worse and, despite assurances to the contrary, is not noticeably improving. It's a dead end for mass production of text and images.
One of my colleagues is worried that it's going to destroy our department. We're in mathematical sciences, and people are hooking LLMs up to proof checkers. There won't be any point in "doing" maths, because the computer will do it quickly and check its work.
There has been a part of me wondering how many organizations are just doing what they wanted to do anyway and SAYING that a robot told them to. There's no way to check.
There are some small uses for the base tech of generative AI; there's no good use case for ChatGPT and its many crapsack clones.
EastEnders is, literally, modern Shakespeare.
Have a day!
Garbage in, garbage out.
If you've ever done real research, you know you end up learning so much more than you set out to; you don't just get an answer to your initial question.