New from me @gabrielgeiger.bsky.social + Justin-Casimir Braun:
Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.
Our deep dive why: https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
Comments
https://bsky.app/profile/lighthousereports.com/post/3lrdjsudvtc2m
And wicked problems are not the same as hard problems (like curing cancer), where you at least know what a solution would look like.
AI models used in a system like welfare should never be about removing human accountability entirely, but rather about enhancing outcomes in tandem with humans.
Seems unwise to predetermine a benchmark for how many applicants should or should not be flagged for investigation.
But assuming they had a valid reason to drive towards that goal, then their system design was flawed.
Fact is, the technology is too new and complicated to assume anybody knows precisely what they're doing with it; even the AI companies admit they don't fully understand it yet.
If that was the goal, then the project was doomed from the jump, right?
You can't remove human accountability from a system like welfare.
And if that's how that AI system was designed, then it was set up for failure.
So it being scrapped isn't the big W anti-AI zealots want it to be.
They're really just as bad as the deceitful clowns who manipulatively oversell the technology.