josephseering.bsky.social
Assistant Prof at KAIST School of Computing. HCI, AI, T&S.
39 posts · 449 followers · 73 following
comment in response to post
There are probably a zillion faculty teaching intro UX/HCI classes who would love to have you for a guest lecture, if that sounds like a good warm-up.
comment in response to post
Yes, likely destructive, but also somewhat unsurprising. Given the extreme popularity of platforms like c.ai, it was to be expected that Facebook would try to capitalize on that trend. (Never mind the legitimate concerns about the impact of c.ai on young users.)
comment in response to post
If there were a good way to consistently/efficiently take off-service conduct into account, far more platforms would do it, but it's really hard to build a good process for that. As a comparison case, Twitch has a pretty interesting off-service conduct policy: safety.twitch.tv/s/article/Co...
comment in response to post
Oh, and I should probably clarify, my personal vote is 100% for banning Singal. I don't care if I never see or interact with his content. I don't want him on a site I'm on.
comment in response to post
(There are also long-term questions about revenue models, resources, etc., but that's another thread)
comment in response to post
With all of this said, I can't even imagine how overwhelmed the team must be. They're way too small a team to be handling this, even with the sudden hiring, and they've somehow managed to launch a crazy number of pretty great features in a short time, but I don't know how sustainable that is.
comment in response to post
That's not to say that I think Bluesky should ditch the user control angle. I think that part is great, and it actually seems to be spurring a lot of innovation in mod tooling. It just needs to end up being like 30% of the solution, rather than the 90% that was initially hoped for.
comment in response to post
So, what happens when users want a T&S approach that's basically X-but-not-shitty, while Bluesky leadership (or a subset thereof) wants to do the whole decentralized thing with filters? So far, the users' vision actually seems to be winning more and more over time, and I suspect that will continue.
comment in response to post
With that said, it has of course been satisfying to see Bluesky T&S banning a variety of agitators over the past few weeks, but those are just the most visible examples. Hordes more people like them have already taken root here -- far more than can be individually whac-a-mole'd by T&S teams.
comment in response to post
but people are increasingly going to realize that there's tons of stuff on Bluesky that they want completely gone from the platform, and no filters can solve that problem.
comment in response to post
It has managed to _seem_ like a community because users still have relatively small networks and the platform doesn't rely heavily on algorithmic amplification of content you didn't specifically choose to see,
comment in response to post
You can see this quite clearly in how people talk about Bluesky as a "community," and seem to think that it's a place for people like them, but Bluesky is not a community. It is 25+ million people with an extremely diverse range of behaviors and views.
comment in response to post
As it turns out, users don't see filters as a direct substitute for centralized T&S actions. They still care if offensive/problematic content exists on the site, even if they personally never see that content. There are lots of good reasons for this. I support this feeling. They _should_ care.
comment in response to post
(And I say this having studied community moderation for almost a decade, and having great respect for the volunteers who moderate communities. They're a much more important piece of the broader T&S puzzle than they often get credit for, but they cannot be the only piece.)
comment in response to post
I empathize with the desire to pass on the responsibility of making these decisions to users. T&S is impossible even when you're well-resourced, and you're almost never well-resourced. Unfortunately, this core idea of delegating T&S power to users can only go so far.
comment in response to post
As a researcher of moderation tooling, I think Bluesky is doing some pretty great stuff, but I'm not especially bullish on this ethos of decentralization, at least in the way it has manifested here.
comment in response to post
The core premise of Bluesky, as embodied in its design, was that users should be in control of what they see. This was in some sense supposed to substitute for having a central authority making decisions about what to allow and what not to allow.
comment in response to post
Bluesky's leadership has adapted very well to users' demands for more proactive T&S involvement, and I have to give them credit for that. They learned much faster than most other platform leadership teams have in the past. The reluctance is still there, though, as evidenced by this quoted post.
comment in response to post
Thank you to those who've reached out. The situation seems to be stable for the moment, but we'll all be watching the news carefully.
comment in response to post
As I'm reflecting on this, I'm wondering how we can encourage more collaborations like this while also making sure to leave space for contributions in moderation tool design from labs without access to N=100,000. Both are valuable and important.
comment in response to post
This work had an advantage in that it had ready access to users on a scale that most academic research labs do not (though, to be fair, I'm sure there were institutional challenges within Reddit in getting approval for this), so the evidence of success is going to be more quantitatively persuasive.
comment in response to post
There have only been a handful of papers in the last decade that proposed new moderation tools and tested them in the wild with real users and communities. Speaking from experience, I can say that this is an extremely difficult type of research to do, and the review process is often not kind to it.
comment in response to post
(Volunteer moderators have built many custom solutions to these problems for their own communities, which are worth learning from, but I digress)
comment in response to post
The need for this type of tooling is clear to anyone who's worked in this space. Moderators (both professional and volunteer) are often overwhelmed by the massive queues they have to filter through, which pull time and attention away from the more important decisions that require contextual expertise.
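To make the triage idea concrete, here's a minimal sketch of how a report queue might be split so that human attention goes to the contextually hard cases. Everything here (the Report fields, the classifier score, the threshold) is hypothetical, not any platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Report:
    report_id: str
    content: str
    classifier_score: float  # hypothetical automated confidence, 0.0-1.0
    reporter_count: int      # distinct users who reported this item

def triage(queue: list[Report], auto_threshold: float = 0.98):
    """Split a report queue into auto-actionable and human-review lists.

    Items the classifier is nearly certain about get handled automatically,
    freeing moderator time for ambiguous cases that need contextual judgment.
    """
    auto, human = [], []
    for report in queue:
        if report.classifier_score >= auto_threshold:
            auto.append(report)
        else:
            human.append(report)
    # Surface the most-reported ambiguous items first for human reviewers.
    human.sort(key=lambda r: r.reporter_count, reverse=True)
    return auto, human
```

The point is just that cheap automated signals can thin the queue so that scarce human expertise lands on the ambiguous middle, rather than being spent uniformly across everything.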
comment in response to post
I've also seen versions of this proposed and tested (not in the wild) with LLMs giving feedback to users during writing, but I actually really like the use of regex here.
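As a rough illustration of the regex approach (the patterns, messages, and rule format below are made up for the example, not the actual system's rules), a community-configured pre-post check might look something like:

```python
import re

# Hypothetical community-configured rules: each pattern triggers a warning
# shown to the author before their post is submitted.
COMMUNITY_RULES = [
    (re.compile(r"\byou people\b", re.IGNORECASE),
     "This phrasing often reads as a hostile generalization. Consider rewording."),
    (re.compile(r"(.)\1{5,}"),
     "Repeated characters can look like spam in this community."),
]

def prepost_warnings(draft: str) -> list[str]:
    """Return any community-defined warnings that match the draft text."""
    return [message for pattern, message in COMMUNITY_RULES
            if pattern.search(draft)]

# Example: the author sees warnings but can still choose to post.
for warning in prepost_warnings("you people always do thisssssss"):
    print("Warning:", warning)
```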
comment in response to post
There have been a few versions of this general concept deployed on other platforms, but none that I'm aware of that put control into the hands of users and communities to specify warnings/filters during the post writing process. I think that's a great iteration on the idea.
comment in response to post
The paradigm is an extension of ongoing conversations about mod tool design which emphasize the need to build tooling that is more proactive in encouraging positive behavior rather than focusing purely on getting better at identifying/removing problematic content that has already been posted.
comment in response to post
I was fortunate to have the opportunity to co-advise Yubin with Prof. Meeyoung Cha, and we were also grateful for feedback from committee member @juhokim.bsky.social!
comment in response to post
Congratulations! Excited to hear more as you're able to share.
comment in response to post
As I'm looking around, it seems like Bluesky is pretty structurally different from most platforms that have had active mod tool developer communities in the past. There are fair comparisons with Twitter blocklists, but it's hard to see the usual community-driven tooling taking off as well here.