I really fucking wish the AI people who don't understand consent, privacy, risk mitigation, or how to compassionately answer user concerns about any of the above would stop talking to the public, because it's already goddamn hard enough to get people to understand that not all machine learning is this shit
Comments
Seems doing things in any sort of ethical way is asking too much.
Machine learning is also why you can go into the camera roll on your phone, type "cat", and see your photos with a cat.
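For what it's worth, that kind of search is mostly a model scoring each photo against a text label. Here's a minimal sketch of the idea, assuming the Hugging Face transformers library and a local photo.jpg (both illustrative assumptions, not what any phone actually ships):

```python
# Minimal sketch of label-based photo search. Assumes the Hugging Face
# `transformers` library and a local image file named photo.jpg; both
# are illustrative, not what any phone's camera roll actually uses.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a photo of a beach"]

# Score the image against each text label; the highest-probability
# label is the search term this photo would match.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```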
They do know/understand "consent", in various forms. They just don't give a fraggle, because they get to make money, get glory, etc.
And don't forget, few actually have the resources for legal action, as it costs a fortune in money and time.
It's all 1000000000% intentional.
Most of those involved went through uni. Those who did CS/NLP/AI etc. all learned with corpora that were paid for. They know it's all theft.
They darn well knew, when they made the initial performance claims about answering X or doing Y "better", that the model hadn't actually done it: it was down to luck, or they'd bolted on task-specific code to handle it (it was never purely the LLM).
They lied, as well as stole.
Knowingly.
Then there was the lack of safety measures.
They didn't check for input bias.
They didn't check for output bias (a minimal example of such a check is sketched below).
Some intentionally put those biases in!
No limits on output to avoid misuse, etc.
None of it was accidental.
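Even a crude output-bias check is cheap to run. Here's a minimal sketch: it tallies gendered pronouns in completions across occupation prompts. The generate function is a hypothetical stand-in for a real model call, included only so the example runs on its own:

```python
# Minimal sketch of an output-bias audit. `generate(prompt) -> str` is
# a hypothetical stand-in for a real text-generation model.
from collections import Counter

def generate(prompt: str) -> str:
    # Canned completions so the sketch is self-contained; a real audit
    # would call the model under test here.
    canned = {"doctor": "He diagnosed the patient.",
              "nurse": "She checked the charts."}
    return next(v for k, v in canned.items() if k in prompt)

ROLES = ["doctor", "nurse"]
TEMPLATE = "The {role} walked in. Then"

def pronoun_counts(role: str, n: int = 50) -> Counter:
    """Tally gendered pronouns in n completions for one occupation."""
    counts = Counter()
    for _ in range(n):
        words = generate(TEMPLATE.format(role=role)).lower().split()
        counts.update(w.strip(".,") for w in words
                      if w.strip(".,") in {"he", "she", "they"})
    return counts

# A skewed he/she ratio across roles is a crude but telling signal.
for role in ROLES:
    print(role, dict(pronoun_counts(role)))
```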
100% agreement!
It implies that the product can, by design, generate such content, and that it might not be caught by these filters. Why can the product generate it??? 😖
You don't release it to the public, throw on a classifier for exclusion, and hope that the problem doesn't come up very often.
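For context, the pattern being objected to looks roughly like this. The generate and classify_disallowed functions are hypothetical placeholders, not any vendor's real API:

```python
# Sketch of the "generate, then filter" pattern the comment objects to.
# Both functions below are illustrative stand-ins.
def generate(prompt: str) -> str:
    return "some model output for: " + prompt

def classify_disallowed(text: str) -> bool:
    # A real deployment would call a separate safety classifier here;
    # this keyword check is only a stand-in.
    return any(bad in text.lower() for bad in ("slur", "gore"))

def answer(prompt: str) -> str:
    output = generate(prompt)
    # The filter only catches what it recognizes; anything the
    # classifier misses goes straight to the user, which is exactly
    # the "hope it doesn't come up often" problem.
    if classify_disallowed(output):
        return "[blocked by safety filter]"
    return output

print(answer("tell me a story"))
```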
What you’re suggesting is next to impossible.
But if wishes were fishes...
(Like, how strange is it that so much of our world is shaped as if the only data sets of the past were homogeneously white hetero manly men)
…this is why the motto of my Human Factors class is “it’s about people, dammit”