Hey, these are standard (non-genAI) moderation tools. They're the only way to systematically address moderation at scale without 1) requiring some sort of payment or 2) giving individual mods free rein to shape discussion like on reddit
The claim they make later on, that because the software they're using is open source evil people will reverse-engineer it into a CSAM generator, is legitimately the viewpoint of someone whose understanding of technology comes from cartoons
I- what? That's utter nonsense. Generally with CSAM detection you're using image hashes specifically so nobody has to possess copies. And it being open source doesn't mean anything of the kind. Wtf
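To make the hash point concrete, here's a toy sketch of the idea. Everything here is invented for illustration (real systems like PhotoDNA use perceptual hashes rather than plain SHA-256), but the principle is the same: the platform stores only hashes of known illegal images, never the images themselves.

```python
import hashlib

# Hypothetical database of known-bad image hashes. The one entry below is
# just the SHA-256 of the empty byte string, used as a stand-in.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_match(image_bytes: bytes) -> bool:
    """Hash the upload and check it against the database of known hashes."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(is_known_match(b""))       # matches the stand-in entry -> True
print(is_known_match(b"cat"))    # unknown content -> False
```

Note that nothing in the hash set can be reversed back into an image; that's the whole point of hashing.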
The other claim is that because it's open source, bad actors will know how to dodge moderation and therefore it's bad and useless, which is just an incredible display of brainpower
Also it's not like this is some new and incredible way of detecting illegal content. People tech-savvy enough to find the holes in it aren't the ones being targeted by these kinds of moderation efforts because they're probably not posting illegal content publicly.
No dude. Gen stands for generative. Creating shit off prompts and feedback. Moderation ai has been around long before the genai nightmare and it’s literally just a program that checks for specific words and phrases.
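A minimal sketch of what "a program that checks for specific words and phrases" looks like in practice (the blocklist and function name here are made up; real filters are bigger and usually handle obfuscation too):

```python
# Hypothetical phrase filter of the kind pre-genAI moderation bots use.
BLOCKLIST = ["buy followers", "free crypto"]

def flag_post(text: str) -> list[str]:
    """Return every blocklisted phrase found in the post, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in BLOCKLIST if phrase in lowered]

flag_post("Click here for FREE crypto!")  # -> ["free crypto"]
flag_post("just a normal post")           # -> []
```

No generation happening anywhere; it's pattern matching, same family of tech as a spam filter.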
People have really confused the difference between GenAI and plain machine learning AI. One is stealing content to spit out garbage and destroying the planet, the other is being trained to do a very hyper specific task to make a larger task easier.
I can't blame people in general for the confusion because companies themselves peddling that shit are fostering it by making AI into a marketing buzzword.
doesn't help that now everything tries to say it's powered by AI and I'm like "is this ok AI like Photoshop has had for years or trash AI like Photoshop has had for like two years"
or alternatively trash AI like facial recognition
or trash AI like facial recognition that also can't recognize black people because the training data didn't include any.
No that'd be the person chewing their bank out. The two times I've had that happen in my working adult life, I've had to fight for it tooth and nail against the institution
Yeah, what a lot of machine learning is good at is 'smart interpolation', and that can be fairly unhelpful for something creative and novel. But when you want to do optimization and data is expensive to get, it can be fantastic and even environmentally friendly
Okay question, and this isn’t a gotcha I legit sometimes don’t know this stuff. Is this potentially taking a job away from someone who would be paid to moderate this? Or would they not pay someone to do it even if they weren’t automating this?
Another benefit to add onto the other comment--a fully human moderation staff requires users to report content in order for a mod to see it flagged. That means more time that potentially harmful content remains up, and more users get exposed to it before action can be taken.
I guess my question is then, is it like oil rigging where it's soul crushing work, but someone is doing it for a paycheck they wouldn't otherwise have (not that it's good that people need to take soul crushing jobs for cash, but would fewer humans be employed)?
Generally speaking, in free-to-access social media you don't really ever see non-ML-supported teams. Companies not using these kinds of tools often just do less overall moderation or rely on specifically unpaid volunteers who introduce their own biases a la reddit.
Just to add on, Facebook has a huge problem with their manual moderation teams burning out (putting it lightly) due to the illegal content that gets flagged. ML does eliminate a lot of that content that they are exposed to. I honestly wouldn't wish a job manually scanning for CSEM/CSAM on anyone.
Those jobs have legitimately ruined their lives. And while they would have to look at things manually sometimes, the less anyone is exposed to, the better.
It seems like the person who was freaking out about this being “ai” is perhaps not the most emotionally subdued individual, because yeah there’s a lot of nuance here
I don’t want anyone doing jobs that ruin their lives, obviously
What this generally does is streamline processes for somebody being paid so it's harder to bias their decisions by flooding/mass reporting a post while still moderating at scale. In my field every flag gets manual review.
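One way flood-resistance can work, sketched as a toy (the function and field names are invented): collapse mass reports so a brigaded post produces a single review item, weighted by distinct reporters rather than raw report volume.

```python
# Hypothetical sketch: each report is a (post_id, reporter_id) pair.
# A thousand reports from the same few accounts shouldn't outweigh a
# handful of reports from distinct users.
def build_review_queue(reports: list[tuple[str, str]]) -> list[tuple[str, int]]:
    """Return (post_id, distinct_reporter_count), most-reported first."""
    distinct: dict[str, set[str]] = {}
    for post_id, reporter_id in reports:
        distinct.setdefault(post_id, set()).add(reporter_id)
    return sorted(((p, len(r)) for p, r in distinct.items()), key=lambda x: -x[1])

build_review_queue([("p1", "a"), ("p1", "b"), ("p1", "a"), ("p2", "c")])
# -> [("p1", 2), ("p2", 1)]
```

The human moderator then works through one queue entry per post, so spamming the report button doesn't multiply the pressure on any single decision.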
this is a genuine, actual question and not trying to get internet points or something i swear
is this the type of AI moderation that causes issues like we see with fat peoples' midriffs/trans people posing the wrong way/etc being flagged as sexually explicit.
Okay so that's actually an interesting question. It *can* do that depending on how you fine tune the training and determine your dataset. They can do that pretty easily, but it's also not hugely difficult to mostly address. While you can't get every edge case guaranteed, you can get quite a few.
hey, thanks for the honest answer. it's something that's bugged me about AI moderation, but... well, at least it's not worse, so thanks for the info, much appreciated. (:
To lay out an overly simplified solution, you can essentially include said midriff pics in your training set as 'not sexually explicit' and that might help.
The main issue is you have to weigh how many false positives you'll accept, and given how few trans people are included at all in that training set, these algos often have very low confidence due to that lack of training data
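That false-positive tradeoff usually comes down to a confidence threshold. A toy sketch, with entirely invented cutoffs: the classifier emits a score, and the pipeline routes the post based on where the score falls.

```python
# Hypothetical thresholds: very confident predictions get automatic action,
# the uncertain middle band goes to a human, everything else stays up.
AUTO_REMOVE = 0.95
HUMAN_REVIEW = 0.60

def route(score: float) -> str:
    """Map a classifier confidence score to a moderation action."""
    if score >= AUTO_REMOVE:
        return "remove"
    if score >= HUMAN_REVIEW:
        return "queue_for_review"
    return "leave_up"

route(0.99)  # -> "remove"
route(0.70)  # -> "queue_for_review"
route(0.10)  # -> "leave_up"
```

Lowering the review threshold catches more real positives but floods moderators with false ones; raising it does the opposite. And when a group is underrepresented in training data, their posts cluster in that low-confidence middle band, which is exactly where the midriff/posing false flags come from.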
I have issues with Aaron at the Bluesky mod team, but when OP decides to close replies and only reply to people who agree with her…that kinda tells me what OP’s motivations are. 🙃
GenAI has poisoned the perception of the broader (and long existing) industry of AI technologies. Like nobody would get mad about spam filters unless you described them as AI tech
It’s not your fault. It’s the branding of Gen AI companies trying to capitalize on a common academic and industry term to add a touch of futurism to their marketing that is purposefully trying to create that association
These tools also usually just flag content for manual review so that any false positives are addressable. You're letting your legitimate concerns over genAI break into pathological fears of anything using advanced statistics. Fucking chill
God, this is nice to see. I've been muting so many words this week because the AI doomers have really started to hammer on stuff like ML because they can't differentiate it from the thing they hate.
The LLM stuff is often bad, especially the image generation. ML doing ML is what we actually want.
I've also kind of gotten tired of the term Gen AI. Lots of ML generates stuff without being the evil they fear. DLSS is generating pixels in real time thanks to ML and that's good. It's LLMs trained on stolen data for dystopian reasons that's bad.
Some years ago, when I was still teaching in engineering, I watched my department start switching coding over to an app because the students didn't know how to use a folder tree anymore
like i've been saying, "ai" is merely a buzzword now. both to corporations and to people who don't actually know what to direct the necessary hate towards
this is not progressive. it's regressive. and we're regressing significantly
Listen, not for practical purposes so much as because it'd be a hilarious pun, I want to see somebody design a game using genAI to dictate the enemy behaviour
That said neither of those two is necessarily a bad thing, because ML solutions sure aren’t free. Nothing is free.
Welp.
But like you said, Gen AI has poisoned the well
https://bsky.app/profile/chaoticgaythey.bsky.social/post/3lhwujjoims2r
Her response was “are you fucking joking?” lol.
Old terms are also viable: 'Enemy Computer'. 'NPC Brain'. 'NPC Thinking'.
Thesauruses and dictionaries exist.
or smth, idk lol