AND WE'RE HATCHED!
ROOST launches at the #ParisAIActionSummit, bringing together tech companies and philanthropies to deliver free, open-source safety tools. Join us: https://roost.tools
Comments
how???
Ew
What if we didn't do this
https://bsky.app/profile/rahaeli.bsky.social/post/3lhvbynzmdc2k
I'm trying really hard not to make a terrible analogy here.
y'all need help.
I assure you this site's head of T&S, who's openly into kids, wouldn't be doing this if it was to protect anyone from child porn. It's for a mill, not for moderation. They want deepfakes.
Just say no to letting these foxes into the henhouse.
There is nothing good about this organization.
https://glaze.cs.uchicago.edu/
People have really just been fucking poisoned they can't think straight when they see the word "AI"
The small tech company in question is an absolute mystery, with no data presented on how it works.
Can you blame them?
All of it
Period
not every ai is fucking generative.
generative ai is the problem, not ai in itself
still, having skepticism about Google is a good mindset, but looking into things goes a long way and saves you from looking like a dummy when the emotions have faded away
CSAM, yes, it absolutely needs to be moderated with oppressive force. A number of concerns come to mind.
no regression models without a license!
FUCK OFF
People are triggered by the letters AI without understanding the difference in means and use, but don't let that discourage you from helping out.
Go away
it's a csam scanner, NOT generative ai. why are you getting mad at a csam scanner JUST because it uses ai? do you seriously want human moderators to traumatize themselves by watching csam?
This was tossed like the usual flaming bag of dog crap out of Aaron's otherwise stone fortress of solitude where he banned a user for revealing he liked porn bots at work and has been on call to protect a bigot.
I was mostly blaming the Roost people for bad communication; there's another comment I made directly at them. But that doesn't absolve others from critical thinking. If they're upset at Aaron, yell at HIM? Find out what stuff is first?
Forgive the common consumer on this one and direct it where the blame should be imo.
Also, ai has been a buzzword in content moderation for the last decade or so.
https://en.wikipedia.org/wiki/DeepDream
I guess launching it at an AI event was… well. How would you like the backlash that comes with the buzz?
https://bsky.app/profile/roost-tools.bsky.social/post/3lhveqagcms2b
i know people are scared of fuck ass tech bros pushing AI stuff onto people but please think about this. if AI isn't handling reports, who else will?
Is this supposed to be AI mods for my online game? Is it moderation *for AI things*? What is "safety" exactly and who are we keeping safe from what?
I don't really understand sock puppet detection/bot exclusion as part of "safety", but I don't work in the space so maybe that's really obvious?
https://huggingface.co/blog/yjernite/roost-launch
and @mozilla.org too!
https://blog.mozilla.org/en/mozilla/ai/roost-launch-ai-safety-tools-nonprofit/
The writing is on the wall for this, folks. Don't give in to the sunk cost fallacy.
That's actually part of why ROOST makes me angry, because it's wasting the time and resources of people who could be making something meaningful instead.
Fuck off and fuck y'all
it's just machine learning, which has been done for years in every industry; they're only saying AI since that instantly triples any company's stock price nowadays
Get rid of it. NOW.
There are toilets for it. Go in a toilet and eat your AI shit in private.
What has Bluesky become?
Join #Nostr, we're friendly
Would you try to destroy screen readers, because they use tts, a form of ai?
It's the only way to be sure.
And who is this "we"? No company name, no directors, no staff, no address. Just a lot of company logos.
Is it true that this project has received funding from Google, Discord, OpenAI, and Roblox?
https://roost.tools/partnerships
bye!
HOWEVER I also think that there's a different kind of AI that can/should be possible: one that's for the general public and from the general public
https://glaze.cs.uchicago.edu/
This is not scary job-stealing tech, this is basically the only thing we can utilize to cut down on the traumatic content moderators have to sort through.
Poor marketing from Roost is 100% responsible here.
Branding and shoving "AI" down our throats, while it's certainly just another CNN image recognition model (yuck at the training data)...
While machine learning is indeed "AI", the timing of the launch and marketing stinks.
These tools have to be constantly refined and improved; it's an arms race. I don't think the launch or the marketing of these vital tools should have to pay any mind to the anti-AI crowd.
They don't make generative AI; it's not their fault ppl don't care to learn the difference.
It's about labelling everything as "AI" because it sells ads and products, and timing the launch of a product in the middle of an "AI summit".
I'm not Anti AI, i'm anti dumb marketing and bad startups.
Roost is a product, and AI is the buzzword that decision-makers like. If they don't use it, they are leaving money on the table.
Why would they do that?
If it means a bunch of children reflexively scream at you, well, the people building this have seen worse.
Yet it has no links to any "open source" resources, or any resources of any kind. Literally reads like Lorem Ipsum mixed with an AI-buzzword presentation.
Fuck off and get out.
What kind of guardrails are there to keep the Artists, Writers and minority creators that made Bluesky a better place from getting false-flagged? Are the data sets trained from properly licensed work or masses of stolen content? Will there be human reviews of 🧵
Basically, in what way is this supposed to help all the smaller creators and marginalized folks that made Bluesky not a 🧵
/🧵
https://glaze.cs.uchicago.edu/
Appreciate your time and giving me this post. Cheers.
He dropped this announcement with the same amount of buzzwords as a techbro with zero explanation. Just opened the curtain and said 'ta-da! AI~ OOOOO~'
On the bright side, since it's now a "Bsky rebellion" it must be good for everyone else?
I wish more people volunteered to mod, at least we'd have enough voices and experiences to make moderation familiar - like driving a car vs flying a plane.
Maybe both?
Just so we're clear:
White rich men privileges.
1. How many exabytes of child rape and exploitation do you think it will take to actually make the AI?
2. How do you plan to legally store your vast library of child porn?
3. Why are Matt Gaetz and Jared from subway involved in this?
We just had to go through something similar on Twitter: our works being stolen to train AI.
Can you assure us otherwise?
Irl is the only real 'safe' place atm, imo
linking to HMA since they're great
https://github.com/facebook/ThreatExchange/tree/main/hasher-matcher-actioner
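For anyone asking what these tools actually do under the hood, here is a minimal sketch of the hasher-matcher-actioner pattern that HMA's name describes, in plain Python. To be clear about what's assumed: the toy average hash below stands in for production perceptual hashes like PDQ or PhotoDNA, and the function names, threshold, and action hook are illustrative placeholders, not HMA's actual API.

```python
# Hypothetical sketch of the hasher-matcher-actioner pattern.
# The "average hash" is a toy stand-in for production perceptual hashes
# (PDQ, PhotoDNA); names, threshold, and the action hook are illustrative.
from dataclasses import dataclass
from typing import Optional

GRID = 8  # toy 8x8 grayscale grid -> a 64-bit hash


def average_hash(pixels: list[list[int]]) -> int:
    """Hash a GRID x GRID grayscale image: one bit per pixel above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


@dataclass
class Match:
    known_hash: int
    distance: int


def match(candidate: int, known_bad: list[int], threshold: int = 8) -> Optional[Match]:
    """Return the closest known-bad hash, if it is within the threshold."""
    best: Optional[Match] = None
    for known in known_bad:
        d = hamming(candidate, known)
        if best is None or d < best.distance:
            best = Match(known, d)
    return best if best and best.distance <= threshold else None


def act_on(m: Match) -> None:
    # Placeholder: a real deployment would quarantine the upload and file
    # the legally required report; no human ever has to view the image.
    print(f"flagged: {m.distance} bits from a known hash")


if __name__ == "__main__":
    # Demo image, plus a "known-bad" list containing a hash 2 bits away.
    img = [[(r * GRID + c) * 4 % 256 for c in range(GRID)] for r in range(GRID)]
    h = average_hash(img)
    hit = match(h, known_bad=[h ^ 0b101])
    if hit:
        act_on(hit)
```

The design point this thread keeps circling: matching compares hashes against a curated list of known material, so nothing generative is involved, no model is trained on the images, and a near-duplicate still matches because small edits only flip a few bits.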
Sorry, subliminal projections of current world events.
We didn't know of this person's behavior at the time the council was set up, and the council was disbanded due to a lack of activity around the same time that chatter about this person came about.
https://about.iftas.org/2025/02/06/funding-challenges-and-the-future-of-our-work/
https://protectchildren.ca/en/resources-research/hany-farid-photodna/
https://www.rescue-lab.org/
TVEC and NCII use the ThreatExchange algorithms
Facebook uses AI to automatically moderate content and I've gotten shares suspended for being "spam" or "harmful content" when it was just articles from the likes of PCGamer or Tomshardware.
So frankly if you want AI, then fuck off from Bluesky and go back to Twitter and Facebook. It sounds like you're much better suited for those sites.
They have no info about what it was trained on or if it will train on content fed into it without permission, and no clear way to fix false positives.
You just saw the word "AI" and didn't understand that it's a broad marketing term, and not always used accurately.
I don't have trust in any company that uses automation strictly as their means of security, not since Tumblr and YouTube; they nuke everything, especially things in relation to queer media, and I'm fuckin tired of the excuse of doing something good-
The only sin here is trying to capitalize on a buzzword.
This isn't the same as generative AI. This kind of tech has been around for over a decade and isn't new.
1/4
2/4
3/4
We absolutely should be condemning products that cause a net negative to society. Content theft, carbon emissions, etc.
However this is not one of them.
4/4
thank you for the information, I'm very glad to see this + I'll do my own homework on this as well
I appreciate you having the patience to educate!
You had a chance to contribute to the well-being of society, and yet you chose something that not only kicks people out of work but will just become outdated and unwanted in time.
Such a waste...
Or as detectives, so they don't need to look at murder scenes.
Thankfully, there are those of us with the balls to stomach it and fight back against the darkest aspects of what humans will always wreak
When you rely on a machine to keep what you find uncomfortable at bay, then its inevitable failure will always be a far worse outcome.
So if that's the case, then just fucking pay people who can do the job in the first place. Don't bring in tech that can fail or cause more damage than good.
This isn't new, it's been the standard for like a decade now. It works. You don't know what you're talking about.
Because I suspect the answer is horrifying.
Even if trained with CSAM (better call the lawyers before you do that), I'd suspect they'd use synthetic CSAM rather than real.
If you meant *patches*, that would be possible, but it'd be constrained to a specific architecture, & the images could be restored.
I'm not sure whether it is trivial to reverse, or whether that even constitutes a problem (after all, that would be a nonstandard usage pattern, outside TOS/SOP, etc.)
https://www.npr.org/2018/11/12/667118322/the-cleaners-looks-at-who-cleans-up-the-internets-toxic-content
1. They *have* a trust and safety team, but 'just scroll until you see some CSAM' is not a feasible option.
2. They are legally required, esp. by California and Europe, to stop CSAM. It's not a game, and it's not optional.
Also, human annotation is declining in use in AI. Scale is getting hit: https://www.inc.com/sam-blum/scale-ai-lays-off-workers-via-email-with-no-warning.html
Also, for the record:
this one deals less w the content but more w other aspects of how the job works
I'd think even having the training data would bring in law enforcement scrutiny, some of it on the engineers. How do you manage all the issues?
It's not "AI" in the art-stealing, crypto-bro way. It's applied science which allows human moderators to not see traumatic material every day at work.
But this technology specifically, built for moderation, is a godsend to mental health in the workplace for moderators.
Because so far automated moderation's track record is fucking abysmal and the entire concept is just an excuse to not pay moderators and/or provide cover to further oppress the already-oppressed
https://www.youtube.com/watch?v=vdit20KRyZ8