there are obviously many problems with AI but probably the biggest is the way its evangelists are determined to use it for things it has absolutely no business being used for
Comments
The intended purpose of AI here is to be a smokescreen - everyone is just following some recipe that came out of a computer, never mind the original intent. This seems to give the putschists a feeling of invulnerability, a shield from the consequences of their actions.
In my twenty years of working with content management systems (including work at the VA), I have to say that "all content should just live in the codebase" has to be quite possibly the stupidest thing I have ever heard.
Unless your user story is "as a completely unethical shadow government employee, I want to have all the content in the codebase so that I can make changes at the whims of my capricious and violent dictator bosses." (Which is a *different* stupid altogether.)
Do you mean evangelists with zero empathy, zero experience with large complex systems, an inability to relate to people, and who are socially, shall we say, diminished? Those evangelists?
can AI speed up the generation of boilerplate code and sometimes make useful suggestions? sure, as long as you make sure to understand what it’s suggesting before plugging it in
can you just ask it to rewrite the entire fucking VA? no! why would anyone think you could do that?!
I just used an LLM to look up some obscure stuff about PDFs generated with XSL, and it actually helped. So it's a helpful tool. I'm all about the right tool for the job. At the end of the day, it's still just a computer that can only do what it's told.
Not to mention that they are fucking with COBOL before this crap is built, assuming they can take one of the world’s largest databases and migrate, test and launch it in what, under six months? 🤣
If you don't understand the problem domain, you will not know what to use and what not to use from the AI's answers. There will often be some really wild stuff in there.
You even see this shit with regular non-generative machine learning for pattern matching. There's something about statistics at scale that really breaks people's intuitions, kind of like quantum mechanics.
Love of LLMs also coincides with a general disdain for workers. Many senior management-types have worked in an administrative capacity for so long they’re out of touch.
They can’t understand why things take so long/cost so much and they think LLMs are a solution to their lazy employees.
I don't think these dudes have ever actually built a working system that does anything important. I look at these statements they make, and as someone with 30 years building enterprise software, it honestly scares me
It’s the shallow aesthetic obsession of fascists. It can look and act like it’s superintelligent so it must be; who cares if it’s a facade that breaks down under the least bit of scrutiny?
Wasn't there a peck of gobshites from some minor copse of academe who recently made the category error of claiming they'd taught an LLM to feel pain, when what they'd actually taught it to do was to output "ow" when they input "smack", as though that wasn't something you could do in two lines o'BASIC?
That right there is the biggest actual "AI risk". Not the tech itself (which frankly could be quite useful if it's used within defined contexts and not managed by sociopathic techbros).
This is absolutely new grad engineer brain at work. They usually have no concept of how complex actual systems in business and government are because they are used to well defined school projects that can be completed in 4 mo.
Unleashing them w/o someone with 5+ years of exp in charge is deadly.
AI will confidently give you a bunch of answers if you ask it to, and it’s very important that you have enough domain knowledge to know which ones are useful and which ones are hallucinated bullshit
It might be useful for the AI to actually spit out its confidence in an answer (that's not definitive, of course, since it's probabilistic, but it would hopefully remind people that the AI can be wrong)
The problem is that LLMs don't have a confidence. They have no conception of true or false. It would be like the autocomplete function on your phone providing a confidence score that the option you pick is correct.
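A minimal sketch of that point (toy numbers, not any real model's output): what an LLM actually produces is a probability distribution over the next token, which measures how plausible a continuation looked during training, not whether a statement is true.

    import math

    # Toy next-token logits for the prompt "The capital of Australia is".
    # Purely illustrative numbers, not taken from any real model.
    logits = {"Sydney": 3.1, "Canberra": 2.8, "Melbourne": 1.2}

    # Softmax turns the logits into a probability distribution over tokens.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # The most probable token here is the factually wrong one, and the number
    # attached to it says nothing about truth, only about how plausible that
    # continuation looks to the model.
    print(max(probs, key=probs.get), probs)

The "confidence" people want would have to come from somewhere outside that distribution; the model itself is only ranking continuations.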
my husband will use ai occasionally and will spend an hour tinkering with it to get something that kind of resembles the code he needs, then more time fixing it so it actually works.
it takes a lot of time and isn’t worth using on smaller problems / shorter code.
It’s also so very easy to trip it up on very basic tasks. Counting letters in words, doing simple sudoku or nonogram puzzles, etc. show just how glaring the holes are.
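For contrast, the letter-counting task these models stumble on is a one-liner in ordinary code (a trivial Python sketch, just to underline how basic the task is):

    word = "strawberry"
    print(word.count("r"))  # 3: deterministic, no statistics involved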
In a way, it's telling - these people are so used to demanding that their subordinates tell them exactly what they want to hear. As a result, any old algorithm that does this is indistinguishable from a real intellect as far as they're concerned.
This has always been the danger of AI since Terminator and War Games and probably before. Just because you have AI doesn't mean you should give it a direct connection to every sharp knife in the drawer.
"Computer says NO."
The responses are a good way to determine who has worked near records management and who has not.
Yes, but not in the way you would guess.
The worldview of the typical AI booster is no different, in terms of callousness to human consequences, from that of the typical Sackler
they don’t understand what they’ve built at all and it’s so, so dangerous to treat a machine that cannot think as one that can