I'd love to read it. Force v. Facebook and the Kristof article on Pornhub have always confused me: at what point does an algorithm go from automated moderation to something akin to arguing that a ransom note assembled from magazine clippings isn't original speech?
Comments
I just don't get how Meta can have a "more money / less money" engagement knob that their business model relies on, and yet they're completely immune from the impacts of turning it.
A bookstore decides to stock books on anarchy, revolution, etc. Further, the staff recommends some titles and tells customers if they liked Book A, they’ll like Book B.
Someone keys a Tesla, and it turns out they shopped there. Should the bookstore be punished for their algorithmic recommendations?
In the case of FB, YT, X, BS, etc., they “stock” millions of “books” per day, and their “recommendations” are human-free, just matching keywords/tags. If you think this somehow makes them more liable than a human who made a knowing decision to say “read this!”, I dunno what to say.
Yeah, that analogy makes it clear that if someone asks for legal content and you give them exactly what they ask for, you have no liability for what they do with it.
Now, if you're running a store where any random person can drop off their zine, you grab a keyword from every paragraph, decide what it's about, and recommend things based on those keywords, pushing extra hard the ones with the best profit margins...I'm actually not sure what liability is there.
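For what it's worth, the mechanism described there is trivial to build. Here's a minimal sketch (the function names, the tagging heuristic, and the margin_boost parameter are all my own hypotheticals, not any platform's actual system) of keyword-matching recommendation with a thumb on the scale for profit margin:

```python
# Hypothetical sketch: tag zines by naive per-paragraph keyword extraction,
# match on those tags, and push high-margin items up the ranking.
from collections import Counter

def extract_keywords(text: str) -> set[str]:
    """Grab the most frequent longish word from each paragraph."""
    keywords = set()
    for paragraph in text.split("\n\n"):
        words = [w.lower().strip(".,!?") for w in paragraph.split() if len(w) > 4]
        if words:
            keywords.add(Counter(words).most_common(1)[0][0])
    return keywords

def recommend(query_tags: set[str], catalog: list[dict],
              margin_boost: float = 2.0) -> list[dict]:
    """Rank by tag overlap, weighted upward by profit margin."""
    def score(item: dict) -> float:
        overlap = len(query_tags & item["tags"])
        return overlap * (1 + margin_boost * item["margin"])
    matches = [item for item in catalog if query_tags & item["tags"]]
    return sorted(matches, key=score, reverse=True)

# Two zines with the same topical match; the higher-margin one gets pushed first.
catalog = [
    {"title": "Zine A", "margin": 0.1,
     "text": "Revolution now, revolution later.\n\nWhy revolution? Because revolution."},
    {"title": "Zine B", "margin": 0.6,
     "text": "Revolution for beginners, revolution made simple."},
]
for zine in catalog:
    zine["tags"] = extract_keywords(zine["text"])

print([z["title"] for z in recommend({"revolution"}, catalog)])
# -> ['Zine B', 'Zine A']
```

The point being: once something like margin_boost sits inside the scoring function, "neutral keyword matching" and "pushing the profitable stuff" are the same line of code.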
In the unlikely event the content is actually illegal, then, if you're explicitly informed, you remove it. But very little content is illegal. Speech must be found to be defamatory by a court. A simple claim is not generally sufficient to establish liability.
I spent the early 2000s on SomethingAwful. I have a lot of tolerance for letting people say bad and wrong things and for hosting such speech. First Amendment.
We can say that §230 doesn't protect illegal speech, but both of those situations seem to hinge on a lack of actual knowledge or intent...even as businesses profit off the results.
It's clearer to me why a website shouldn't be responsible for simple misinformation, even when it's algorithmically promoted.
Also, I have zero desire to be blocked :-) I pushed this line of thinking too hard on Twitter years ago (I was pissed at Meta during COVID for promoting bullshit and downplaying actual information) and that was the result. I enjoy reading you.