My biggest fear of AI is our willingness to encode our biases and prejudices into a machine while allowing it to make decisions without accountability.
Comments
It's also not new. The same ethical debates exist for standard ML. Biased unsupervised (pun intended) ML algorithms embedded in healthcare, justice, or insurance systems are already causing harm. AI systems will be even more opaque and will let people feign ignorance when things go wrong.
I completely agree that deploying AI responsibly is a big challenge and should be done slowly, with all the right checks and balances in place. But as with any piece of software, I'm curious what you think about the possibility of iterative improvement. Even if vNow has issues, vNext could fix them!
It's interesting. I have not seen a single thing about 'AI' that alarms me. The concerns you have are real and should be addressed, though. I think the entire hype bubble will burst before harm can be done with encoded biases, but I may still lack understanding of what the tech is doing. 🤷‍♀️
We've been doing that since before AI (your car's physical gearbox encodes a lot of assumptions and design thinking, just not in code you can examine, as Eric Horvitz once put it to me); for all the downsides of automation, we are also having the conversation about bias in tech a *lot* more now.
Agree 100%. Biases are super tricky since all data has biases built into it by design. I still believe we can do better when it comes to building representative datasets. The annoying thing is that synthetic data is more of a thing now, and it's generated from that same biased data.
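To make "representative datasets" a bit more concrete, here is a minimal sketch of the kind of representation audit you could run before training. All group names and proportions below are invented for illustration, and the 0.8 threshold is an arbitrary choice, not a standard:

```python
# Minimal sketch: flag groups that are under-represented in a training set
# relative to a reference population. All numbers here are invented.
from collections import Counter

dataset_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50   # hypothetical sample
reference = {"A": 0.50, "B": 0.35, "C": 0.15}             # hypothetical population shares

counts = Counter(dataset_groups)
total = sum(counts.values())
for group, target in reference.items():
    observed = counts[group] / total
    status = "UNDER-REPRESENTED" if observed < 0.8 * target else "ok"
    print(f"{group}: dataset {observed:.2f} vs population {target:.2f} -> {status}")
```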
It’s an extension of the “data-driven” mindset: the “data” is only what can easily be collected as metrics, while all the meaningful stuff, the stuff that keeps us going, is made to disappear.
Indeed… tech bros and AI enthusiasts without any knowledge or understanding of impact and decision-making systems should not be left selling irresponsible tools.
Any software or tool is JUST A TOOL and should not make decisions!
My biggest concern is that with so much AI-generated content, LLMs will eventually consume their own output. Where will human curiosity, invention, and creativity come from? Will we get better at generating ideas, or will AI cause our unique abilities to atrophy, like writing did to our long-term memory?
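That "consume their own output" worry has a simple statistical caricature. A toy sketch (my illustration, not anything from the thread): repeatedly refit a distribution to samples drawn from the previous fit, and diversity tends to drain away over generations.

```python
# Toy sketch of "model collapse": each generation is fit only to samples
# from the previous generation's model. With a Gaussian standing in for a
# model, the fitted spread tends to drift downward over generations
# (the exact trajectory depends on the random seed and sample size).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: the "human-written" data distribution
for gen in range(1, 21):
    sample = rng.normal(mu, sigma, size=50)   # "train" on the previous model's output
    mu, sigma = sample.mean(), sample.std()   # the new "model" is just a refit
    print(f"gen {gen:2d}: sigma = {sigma:.3f}")
```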
Makes it all the more crucial for the developing organization to have a good culture: http://melconway.com/Home/Conways_Law.html, since an organization’s work tends to reflect its own traits eventually.
Hi! Could you provide an example of such willingness in action? I wonder what you think of the safety guardrails and of tools such as fairlearn or deepchecks.
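For anyone who hasn't tried those libraries: fairlearn, for example, ships disparity metrics that make this kind of check fairly cheap. A minimal sketch, assuming a recent fairlearn version; the labels, predictions, and the sensitive feature below are all made up:

```python
# Minimal fairlearn sketch: per-group accuracy plus a demographic parity gap.
# y_true / y_pred / group are fabricated toy data.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
group  = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])  # hypothetical sensitive feature

# Accuracy broken down per group: large gaps are a bias warning sign.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)

# Difference in positive-prediction rates between groups (0.0 = parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```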
Reminds me of the work that these women have been doing in the space to raise awareness.
https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/
So I believe that this time, too, we will make the right choice or find a way to make it right 😉
https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
I'm afraid of what happens if we don't succeed in divorcing technological advancement from the exclusive service of extreme profit extraction.
I was just reading about a transcription tool used in hospitals that is prone to making things up.
https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14