Unless your actual goal is to fuck that thing up then
Do not
Apply generative "AI"
To anything
That needs to be
Exactly
And precisely
Correct.
Fucking stop it.
Reposted from
makena kelly
SCOOP: DOGE wants to rebuild SSA's codebase in months, risking benefits and system collapse, sources tell me.
The plan is to migrate all systems off COBOL quickly which would likely require the use of generative AI.
www.wired.com/story/doge-r...
Comments
They have no idea that every single person has to get the correct amount.
At a certain time.
Based on their specific data and *laws*.
Estimates or omissions generated by AI aren’t acceptable.
Disaster incoming!
It could well be that they simply want to break it.
Not a pretty picture.
It's just one possible scenario.
I'm curious what documentation will remain.
https://tinyurl.com/bdaavcm2
Also I think these people are very happily aggressive in their ignorance because of never experiencing consequences.
https://bsky.app/profile/miniver.bsky.social/post/3llh2y3w3a22w
(Though I'm also fairly sure at this point that fucking things up is their goal, but)
Or the point is to destroy it.
While they do intend to screw over a lot of beneficiaries, they're using AI because they (wrongly) believe it works
Sometimes when coding you have to work around issues in... uh... non-standard ways.
We honestly CAN'T reproduce the embedded knowledge.
I think maybe Elon doesn’t.
Maybe.
LLMs can be helpful, no doubt, but they are NOT trustworthy.
If we had unit tests on the COBOL code you could at least make a credible effort, but... legacy COBOL + unit tests = DNE.
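The point about unit tests is why migration teams usually start by building characterization ("golden master") tests: record what the legacy system actually outputs for representative inputs, then require the rewrite to reproduce those outputs exactly. A minimal sketch of the idea, with entirely hypothetical function names and a toy formula (the real systems are COBOL batch jobs, not Python functions):

```python
# Characterization-test sketch. Everything here is hypothetical --
# the toy formula is NOT an actual benefit calculation.

def legacy_calc(wages):
    """Stand-in for the legacy COBOL routine (toy formula, illustration only)."""
    return round(sum(wages) * 0.9, 2)

def rewritten_calc(wages):
    """Stand-in for the rewritten routine that must match the legacy output."""
    return round(sum(wages) * 0.9, 2)

# Step 1: record "golden" outputs from the legacy system on representative cases.
cases = [[1000.0, 2000.0], [0.0], [1234.56, 7890.12, 42.0]]
golden = {tuple(c): legacy_calc(c) for c in cases}

# Step 2: the rewrite must reproduce every recorded output exactly --
# no estimates, no "close enough".
for case, expected in golden.items():
    assert rewritten_calc(list(case)) == expected, case

print("all characterization cases match")
```

Without this kind of harness (and with the embedded business rules undocumented), there's no way to verify that a fast AI-assisted rewrite computes the same amounts the law requires.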