Most reporting on AI examines worst-case systems deployed under the guise of efficiency. But what would a good-faith effort at Ethical AI look like? For two years, we’ve been looking over the shoulder of a city trying to do things differently.
Amsterdam spent years trying to build a fair algorithm to detect welfare fraud. It hired consultants, ran bias audits, consulted citizens & even contacted @lighthousereports.com. In the end it failed — we wanted to understand why.
With @technologyreview.com and @trouw.nl we tell the story of an ambitious attempt at building an ethical system, one built on the “Responsible AI” playbook. Why did it collapse, and what does this mean for AI coming to a government near you?
Our investigation tackles a simple but urgent question: Can Responsible AI that makes sensitive decisions about people’s lives actually work in the real world?
Amsterdam spoke to academic experts. Audited for bias. Reweighted data. Spent thousands of euros on consultants. Chose explainable models over black boxes. It even consulted welfare recipients for feedback.
And it invited @lighthousereports.com to watch as it unfolded. The city gave us full access to its system, including code, ML models, and comprehensive data.