I built a tool for myself tonight that I never would have gotten done in the free time I have, and it'll help me do more writing. It's built with code & data I trained and gathered consensually, myself, on my own machine. This is why I'm saying we need better AI critique.
Reposted from
Kevin M. Kruse
No, they actually do suck and aren’t useful.
Comments
For data management alone, natural language AI tools have a lot of value: "take this huge document and extract all of the instances of ASTM with the code following it"
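For a target that well-structured, the extraction half of that request can even be sketched without a model at all. A minimal Python baseline, assuming designations in the common "ASTM" + letter + number form (e.g. "ASTM D638" or "ASTM C150-22" — the exact pattern here is an illustrative guess, not a full parser for every ASTM format):

```python
import re

# Matches designations like "ASTM D638" or "ASTM C150-22":
# the prefix, a category letter, digits, and an optional year suffix.
ASTM_PATTERN = re.compile(r"ASTM\s+([A-Z]\d+(?:-\d+)?)")

def extract_astm_codes(text):
    """Return every ASTM designation code found in a document, in order."""
    return ASTM_PATTERN.findall(text)

sample = "Tested per ASTM D638-14 and ASTM C150; see ASTM E84 for flame spread."
print(extract_astm_codes(sample))  # ['D638-14', 'C150', 'E84']
```

The natural-language tool earns its keep when the codes *aren't* this regular — misspellings, odd spacing, codes referenced indirectly — which a fixed regex like this will miss.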
"Real Dumb" is what users who depend on them become so that's what I call 'em.
Their knee jerks into "plagiarism machine" type responses, which pushes back against progress.
At the same time we have a chance now to try and stop a rather bad thing before it's entrenched and much harder to fix
but if you're using a publicly available LLM as a base, I'd argue that is almost certainly not the case.
This literally is the discussion you want to have.
Possible rubrics: environmental impact, dubiousness of initial training corpus
The thing that keeps me away from ai, aside from just not having an immediate application, is the fact that it *is* tainted, though. Very few people would ever have the necessary resources to say, train & test a new base model without intellectual property violation.
Using this as an example isn't good criticism, either.
(Using my own data)
if they weren't useful they'd be self-limiting, like cryptocurrency
https://bsky.app/profile/kbsutt.bsky.social/post/3lnjr6v465c2w
Tech idiots are ending our world-leading science and medical research while claiming, somehow, without the research on which it'd be based, that it'll cure diseases and replace the thousands laid off.
https://podcasts.apple.com/us/podcast/a-p-i-resilience/id1516437015?i=1000700908652
They are the IPAs of machine learning
Occasionally you’ll find something compelling, but high suspicion is not invalid
The second biggest problem is fraud and scams at ridiculously massive scale
(And IPAs are the best tasting beers)
https://www.techpolicy.press/minors-are-on-the-frontlines-of-the-sexual-deepfake-epidemic-heres-why-thats-a-problem/
It used to be socially acceptable to drink and drive. We are in that phase.
You can distill models and many new models now are actually distilled models. At the least technical level, you can use an off-the-shelf LLM with a large context window...
The advantage would be things like semantic search, natural language searches, etc.
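The semantic-search advantage boils down to ranking documents by similarity to a query rather than by exact keyword match. A real pipeline would use learned embeddings from a model; as a stdlib-only stand-in, here is the same ranking machinery over toy bag-of-words vectors (the `vectorize`/`search` names and the sample docs are all illustrative, not from any particular library):

```python
import math
from collections import Counter

def vectorize(text):
    """Toy stand-in for an embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query, docs):
    """Rank documents by similarity to the query, best match first."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)

docs = [
    "invoice for office supplies",
    "meeting notes from tuesday",
    "supplies ordered for the office",
]
print(search("office supplies", docs)[0])  # "invoice for office supplies"
```

Swapping the count vectors for model embeddings is what turns this from keyword overlap into actual semantic matching — "receipt for stationery" would then rank near the top too.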
I mean, yeah, it doesn't help that the tool is primarily being advertised for one of its least useful qualities, but still.
used it to make some tables right after, which probably took as long as it would have if I'd done it myself in Excel, but prettier
If you use it as a calculator it's not awful!
One helped me make a chrome browser extension to make YT scrubbing functional on long videos in like 20 min. Never made an extension before
a) you inappropriately quoted a non-argument based on totally different reasoning
b) good for you! proves nothing about general utility
c) it is true and will remain true that human beings need a LOT less fucking "tech" in their lives than they're currently relying on
You don't come up with your own encryption algorithm or password management solution, because it will always be less secure and take far longer to write.
AI coding, used correctly, just makes it faster to put together those low-level pieces.
A competent developer can use an AI assistant to build things better and faster. An incompetent one will end up with something worse. I've seen both happen.
I'm not saying anybody has to use it either, and I'm not saying it's necessarily worth all the other bullshit around them.
90% of my job is dependency and risk management, you don't need to teach me about reinventing the wheel ;)
The OP was about building a tool, writing code, so that's the context I'm looking at it from.
It's also the only thing I've found them *really* useful for.
- nostalgia: OK sometimes, watch out
- LLMs: not even once
I’d love a more ethical alternative that was *also* highly accessible.
"I Made a Tool!" isn't very helpful. What tool? "Show your work" used to be a Thing.
GitHub, or go home... ¯\_(ツ)_/¯
ollama run gemma3 "Do I have enough tide pods to serve 6 people? $(cat groceries.txt)"
I'm not making a repo for that. Naive to think it warrants one
Why can’t we just be happy that people built a tool that helps them in a way that’s meaningful to them, without the “HaVeN’t YoU hEaRd Of PeEr ReViEw?!?”
I’ve built lots of silly demos, proofs of concept, and tools using AI.
It’s fun, it’s fast and effective for what I need it to be.
All Vibes. ¯\_(ツ)_/¯
Why are you now upset people want proof?
There are lots of questions around ethics and how to manage it. Can we have those convos?
There’s nothing wrong with rejecting tech, but rejecting *discussion* of that tech is a problem.
I find it to be a direct assault on fundamental aspects of my field -- a need for factual accuracy, a desire for above-average writing that doesn't recycle cliches, and most important, original work that cuts *against* the grain rather than reifying it.
And completely missing the point if used for art.
And in that spirit please read any "you" in this thread as "AI advocate in general" and not YOU you.
But I have to deal with the bullshit machine every day now -- watching students' skills atrophy as they opt for this, getting emails from hucksters who will NOT take a polite no for an answer ...
And if you all need a "better critique" then I hope you find that somewhere else. Because I'm sick of dealing with it, and the assurances that it's Great Actually don't remotely ring true in my experience.
https://koomen.dev/essays/horseless-carriages/
And I share @kevinmkruse.bsky.social’s loathing of it being spammed at us from all sides.
But agree we should be identifying who is the source of harmful effects so we don’t lose the potential beneficial ones.
The last three emails I got were from small startups, so no, it's not BIG TECH that's the whole problem.
I don't give a shit who controls it. I do not want it. Why is this so hard to understand?
Do whatever you want with it. I want nothing to do with it.
Why can't you just take no for an answer?
And yeah, we're allowed not to want it. Go play with it yourself, have fun. Please stop pushing it in my face nonstop.
The last three very hard sells I got came from two academics and a startup.
Big, small, it’s all annoying. Just leave me alone and stop trying to force this garbage on us.
Machines for machine audiences, humans for human audiences.
There are positive aspects of most things that are actually bad for society.
There are YT channels that use it heavily to provide subtitles and audio tracks in five different languages
Other than that I can’t think of any good uses of AI
iOS autocorrect is still an absolute sh*tshow for instance
Closest I can think of is things like https://jillianbommarito.com/wikimedia-says-no-llm-training/