WARNING EVERYONE. BLUESKY IS TEAMING UP WITH A LARGE AI COMPANY. MAKE YOUR DISPLEASURE KNOWN. THEY ARE GOING TO TRY TO GO THE WAY OF TWITTER. GIVE THEM ZERO REST.
Nah. We make them realize how bad they fucked up by hitting them where it hurts. By tanking their marketability and being loud and refusing to back down.
I respect your concerns and suspicions here but this AI is the old "help detect cancer" type, not generative. The open source is actually OK, despite the subject matter. Not giving humans PTSD with this content that *must* be stopped is important.
Omg not fuckin aaron too like brother. you still have bandaids slapped over the last drama that aint doing shit. Like. Last i saw from @literallyjustadog.online she's still being ghosted and now this??? Like i commented on his post. I may self identify as a clown, but this is Joker shit
Very "Head of trust and SAFETY" to sell/allow the scraping of our information. So safe. much secure. very public figure. Where's my custom discord emojis ffs I need my EMOTIONS
the worst part is. this massive AI company he's partnering with? backed by OpenAI, Google, Roblox, and Discord? they deal with CSAM. massive, incomprehensible amounts of it. They have trained an AI on it. they are making this AI open source. Actively thwarting any uses it may have had.
Ok but is this generative AI? Bc machine learning covers more than just generative AI, and it's the generative shit that causes the issues we've seen on other platforms. It would definitely be good to know how they go about training the software, but AI as a non-generative tool is not inherently bad
I'm as against AI as the next artist but what is your alternative? are you volunteering to be a part of the human team who has to look at this kind of thing for 8 hours a day??
And, just in case, here's some wayback machine links to the roost site's partnerships page and their vague "what we do" that doesn't explain any of what they do. In case of changes/needed for additional screaming
(didn't want to include it directly on the quote post for safety reasons, but kudos for screaming about it, because that's what sent me down the rabbit hole of "wait, what? and they're-oh. oh no.")
Thank you, I've been having people trying to tell me that this open source code definitely can't be edited to make it do the opposite and I'm like "... code can be changed."
and people keep trying to gaslight me and go "no silly girl you can't know about this this is only for us tech people to know about"
honey I fixate on shit and I learn everything I can about it. if I'm this passionate about something, I know what the fuck I'm talking about. This is a no-win sitch.
Ugh. Yep. I've been on the frontlines against this generative AI stuff for the past couple years and I cannot even count the amount of times I've gotten "You're just an artist, you can't get it."
Tech. Artist. Kinda in the name. I don't know everything, but I know a fair amount.
I'm getting someone with all kinds of alt accounts replying at me right now, so that's... Fun. I'm still not gonna stop telling people what's happening, though.
you and me both. I'm still fucking baffled that they'd think handling CSAM WHILE BEING OPEN SOURCE IS A GOOD IDEA? They're giving those pedo scumbags the easiest way to avoid detection of CSAM???? Literally all they have to do is look at the code to avoid it... or make the AI replicate it.
I kinda felt like this was too good to be true since just about every company somehow ends up with the motivation to destroy their own product for profits nowadays.
GenAI, which is already known to be used to create CSAM, is not going to fix the goddamn problem, ESPECIALLY WHEN YOU MAKE THAT SHIT OPEN SOURCE. Anyone can fuck with open source shit, anyone can clone it. Saying otherwise is outright dishonesty.
I mean, you can't exactly *make* CSAM from AI, because the entire thing about CSAM is a child being hurt.
I'm. A little more concerned how they even train a model in the first place. There's only one way to do that. And it'll be *open-source*? what the actual fuck
There was a 60 minutes episode with Facebook contractors who sued FB for PTSD they got classifying images for their model. It’s sickening - the sheer number of images of violence, sex, CSAM, etc they had to look at all day, every day.
I say this sincerely knowing you mean well--but it doesn't, and it can be... very upsetting to CSA victims to hear their pain is at all similar to a computer spitting out an image.
If it's realistic enough, yeah, it would be illegal and horrific. But it wouldn't be CSAM specifically.
THIS AI COMPANY, ROOST, HAS GONE ON RECORD TO SAY THEY WILL BE DOING "OPEN SOURCE" CSAM PREVENTION. THAT MEANS THEIR MODEL WILL BE TRAINED ON CSAM, AND BECAUSE IT'S OPEN SOURCE, IT CAN BE RE-TRAINED TO REPLICATE IT. THIS IS ABSOLUTELY ESSENTIAL TO BE STOPPED.
On one hand I totally understand the need for alternate moderation, because forcing real human beings to look at gore and csam all day has been tried and been disastrous. But on the other hand, there's got to be a better way than training a neural net on the stuff to do it. False positives aside it's just
okay come on now
I get the fear around AI, but posting a literal getPost error as proof of suppression is just insincere or misinformed
I'm all against genAI and unethical scraping, but please slow down and reconsider, because not every "AI" is the bad kind, same as "cyber" isn't just "cybercrime"
I feel like it's a fair effort to use tech to help with moderation, alright? the alternative is forcing people to moderate it manually, which I'd be surprised if you'd prefer instead.
and what the hell would they use "CSAM AI generation" for anyway???
This is categorically false, please don’t spread misinformation like this, these tools have nothing to do with generative AI and can’t be used for that purpose
People will claim it's "physically impossible." ... I don't know how to explain this to you, but. Anyone can take this publicly available code, and see what it does to detect CSAM, and avoid it. Or, even worse, replicate it to instead produce it.
if you could somehow extract the data used to train a model off of its final weights (which is the part that open source models make public), then meta would be having a pretty tough time in court right now, because the plaintiffs in all those copyright suits would have extracted the training data used for llama
i don't mean to come off as an aibro, because i'm pretty far from that, and i'm also a bit skeptical of this roost effort. but the claim that you can generate csam from a model meant to classify it spreads a lot of fud that miseducates people about how these kinds of classifiers work
also let's be 100% straightforward. they're backed by OpenAI and Google. And Roblox, and Discord. Roblox exploits kids for money on the daily. They will again.
Meta deserves a rough time in court for straight up stealing art. But yeah. Code is just that- code. It can be changed to do whatever you want it to do.
BE LOUD. BE ANGRY. MAKE THEM FUCKING REGRET CREATING THIS GODDAMNED WEBSITE. BRING DOWN THE VALUE OF THIS SITE. GET WEIRD. GET FREAKY. SCARE THE FUCKING AIBROS OFF.
The general concern around AI as it pertains to LLMs is definitely valid, and I personally agree that the proliferation of LLMs will get us nowhere. With that said though, content moderation absolutely requires AI tools (be it for CSAM detection or triaging of content for human review).
Yeah, it’s the numerous accounts of exploited workers that have gotten PTSD from viewing so much of humanity’s evil while content moderating that has me acknowledging AI specifically and only for this purpose can stop real harm for both workers and users
my issue isnt the concern that it relates to generative ai, my issue is how fallible ai is as a moderation tool. people get suspended and shadowbanned on twitter all the time for things getting falsely flagged by the ai moderation, and i dont want bluesky to become like that.
twitter was functionally moderated without the use of ai like this (it's backed by openai and google, and trained off of real csam content fed into it) for like two decades before musk took it over and openai came into existence. i dont think that is as much of a concern as you think it is.
might be a good idea to have moots you intend to reconnect with have your discord through DMs or some other platform just in case he decides to do a naz-errr muskrat and ban every account that opposes him
If you actually read it, they're teaming up with a machine learning company to do algorithmic moderation better. That is an entirely *good* thing. We sorely need that on bluesky, and seeing the generic marketing term "AI" and freaking out like this is very unreasonable.
Also the algorithmic moderation happens to be a way to moderate CSAM without having to collect a database of images or risk human moderation. That is an entirely good thing! And it's open source! No new training data needed!!!
The AI hate is starting to feel like a lot of the arguments against GMO foods. It's possible to hate how an industry is structured/see how capitalism ruined it without hating the underlying technology (which in both cases can be useful).
Hire PEOPLE for that. Don't let machines do people's job. Automating it will only lead to false positives and unjust bans or refusing to ban people who deserve it. This is a problem on every site that uses automated moderation. Using AI is even worse because it tries to use machine for human reason.
You don't understand how this works if you're saying that. The tool takes hashes from databases of Hashes stored by the feds and other companies and then can search bsky for existing CSAM. If the hash matches it, it sets a flag, and can delete it.
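The flag-on-match flow described here can be sketched roughly like this. Everything below is a hypothetical illustration: the database, function names, and hash entries are made up, and real systems (PhotoDNA, PDQ) use perceptual hashes so near-duplicates still match, rather than a plain cryptographic hash. The point is that the tool only compares fingerprints against a list of known material; nothing in this flow generates anything.

```python
import hashlib

# Hypothetical stand-in for a database of known-bad hashes maintained
# by bodies like NCMEC. Real deployments use perceptual hashes (e.g. PDQ),
# not SHA-256, so slightly altered copies still match.
KNOWN_BAD_HASHES: set[str] = set()

def check_upload(image_bytes: bytes) -> bool:
    """Return True if the upload's fingerprint matches a known-bad hash
    and should be flagged for deletion/reporting."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# A match sets a flag; a non-match is simply ignored. The matcher never
# stores, reconstructs, or produces image content -- only hex digests.
```

Note the one-way nature of the comparison: a hash cannot be reversed back into the image it came from, which is why these lists can be shared between platforms at all.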
I actually work in an adjacent space. We use the models to flag things for manual review because you can't just have humans comb through every post on a free website. It's not logistically feasible. This helps prevent one individual enforce their personal opinions as is common with reddit moderation
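The flag-for-manual-review workflow this poster describes can be sketched as below (a minimal illustration with hypothetical names and an arbitrary threshold, not any platform's actual pipeline): a classifier emits a score, and only posts above a cutoff land in a queue for a human to look at. The model triages; people decide.

```python
from dataclasses import dataclass, field

# Hypothetical cutoff; in practice tuned against false-positive rates.
REVIEW_THRESHOLD = 0.8

@dataclass
class ReviewQueue:
    """Posts whose classifier score crosses the threshold, awaiting human review."""
    pending: list[str] = field(default_factory=list)

    def triage(self, post_id: str, score: float) -> bool:
        """Queue a post for a human reviewer if the model score is high enough.
        Returns True if queued, False if no action was taken."""
        if score >= REVIEW_THRESHOLD:
            self.pending.append(post_id)
            return True
        return False

queue = ReviewQueue()
queue.triage("post-1", 0.95)  # crosses the threshold: a human will look at it
queue.triage("post-2", 0.10)  # below threshold: nothing happens
```

The design choice this reflects is the one argued in the thread: the model never bans anyone on its own; it only decides which tiny fraction of posts a finite human team spends its attention on.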
G-d you people annoy me. In no way shape or form does your statement match reality. They're using traditional ML tools to flag content for review. This is open source! You can just look! But half of you would rather cry wolf on basic tools that are in no way connected to genAI other than using math.
Comments
https://bsky.app/profile/aaron.bsky.team/post/3lclbmzf4vc2l
More info here:
https://bsky.app/profile/roost-tools.bsky.social/post/3lhtspgi2hk2v
Made a whole ass thread primarily about the worst part of it, even screenshotted one of the posts where they made claims about it.
https://webcf.waybackmachine.org/web/20250210202015/https://roost.tools/#what-we-do
https://webcf.waybackmachine.org/web/20250211033608/https://roost.tools/partnerships
https://bsky.app/profile/rahaeli.bsky.social/post/3lhvbynzmdc2k
Get on the Fediverse ASAP.
It sucks so much.
How do social media companies keep thinking "oh the thing that made another platform go down the drain? Yeah we need that too"
If it's realistic enough, yeah, it would be illegal and horrific. But it wouldn't be CSAM specifically.
I retract my previous comment. I think it just makes me uncomfortable, specifically.
This is a moderation tool. It's not scraping for genAI or LLM stuff
i'd prefer whichever method is more robust even if it isnt perfect. people will always try to cheat the system.
not sure of implications here...
https://bsky.app/profile/hrath.bsky.social/post/3lhuelztklc2a