josephseering.bsky.social
Assistant Prof at KAIST School of Computing. HCI, AI, T&S.
39 posts 449 followers 73 following

I had an interesting conversation a couple of years ago about whether ~AI-generated content creators should be handled the same as human content creators from a T&S perspective. At the time, it was an academic conversation, but it seems to be increasingly relevant now.

Generally speaking, if community moderators want a feature enough to build it themselves, it's often worth considering for wider deployment. Many of the most powerful user-facing moderation tools on platforms started as third-party concepts built by users themselves to meet their specific needs.

This is a great feature idea, and FWIW very similar features are used in community moderation, where moderators can leave notes about particular users to remind themselves and other mods. Mostly this is done via third-party tools, but some first-party too. No reason it wouldn't work on bsky.

I wonder whether there was any serious discussion about not implementing this. It may seem like a no-brainer, but there's a serious discussion to be had about value added vs increased safety costs.

Side note, I was trying not to get too much into the details of that specific case, but off-service conduct policies are really interesting. I think people don't often realize how much policies are shaped by the technical capacity to enforce them.

The question of whether to boot Singal is one of what will be an increasingly large number of decisions that Bluesky as an organization really does not want to make. It's important to remember that Bluesky was created with an ethos directly opposed to central authority making these decisions.

Riding in a taxi this morning, the driver was listening to a popular radio program that teaches English through references to news articles and current events. The segment ended by teaching the words "martial law", "declare", and "lift."

So I really like a lot about what this paper is doing, and I hope we can see more of this.

Proud to announce the first successful MS defense from my lab! Yubin Choi presented on her work studying users' perceptions of privacy issues when disclosing health information to LLMs. She is applying to PhD programs in CS/HCI this cycle, so keep an eye out for her application!