2. Implementing a chronological feed still requires making design decisions (e.g., does a user see their own posts if users they follow share them?), which are expressive of what kind of service the platform operators want to provide (see the first sketch after this list).
3. Opening up platforms to liability for *prioritizing content* means that, e.g., Reddit (and Lemmy instances) would be liable for showing comments sorted by upvotes.
4. The choice to prioritize certain content is like a newspaper's picking what to put on A1 above the fold, i.e., traditional publishing.
6. "Algorithm bad!" is \the same mindset that declares anything other than a chronological feed "addictive", which means that *deliberately anti-addictive* features, like an algorithm that counts how many posts you make in rapid succession and inserts a suggestion to go for a walk, are verboten!
I'd love to read it. Force v. Facebook and the Kristof article on Pornhub have always confused me: at what point does an algorithm go from automated moderation to the platform's own speech, the way a ransom note assembled from magazine letters is still the writer's original speech?
I just don't get how Meta can have a "more money / less money" engagement knob that their business model relies on, and yet be completely immune from the impacts of turning it.
A bookstore decides to stock books on anarchy, revolution, etc. Further, the staff recommends some titles and tells customers if they liked Book A, they’ll like Book B.
Someone keys a Tesla, and it turns out they shopped there. Should the bookstore be punished for their algorithmic recommendations?
In the case of FB, YT, X, BS, etc., they “stock” millions of “books” per day, and their “recommendations” are human-free, just matching keywords/tags. If you think this somehow makes them more liable than a human who made a knowing decision to say “read this!”, I dunno what to say.
Yeah, that analogy makes it clear that if someone asks for legal content and you give them exactly what they asked for, you have no liability for what they do with it.
Now, if you're running a store where any random person can drop off their zine, and you grab a keyword from every paragraph, decide what it's about, and recommend things based on those keywords, pushing extra hard the ones with the best profit margins... I'm actually not sure what the liability is there (a toy sketch of that setup follows below).
We can say that §230 doesn't protect illegal speech, but both of those situations seem to hinge on a lack of actual knowledge or intent... but with the business still profiting off the results.
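To make that second situation concrete, a toy sketch of the keyword-matching, margin-boosted zine recommender might look like this; the titles, tags, margins, and scoring are all invented:

```python
from collections import Counter

# Toy inventory: (title, tags, profit_margin). All values are invented.
ZINES = [
    ("Mutual Aid Monthly", {"anarchy", "community"}, 0.10),
    ("Garage Biology",     {"diy", "science"},        0.35),
    ("Rev Up",             {"revolution", "history"}, 0.50),
]

def recommend(recently_read_tags: list[set[str]], margin_boost: float = 1.0) -> list[str]:
    """Rank zines by naive tag overlap with what the customer read, then
    nudge higher-margin titles up the list. No human ever reads anything."""
    interest = Counter(tag for tags in recently_read_tags for tag in tags)

    def score(item):
        _title, tags, margin = item
        overlap = sum(interest[tag] for tag in tags)
        return overlap + margin_boost * margin

    return [title for title, _, _ in sorted(ZINES, key=score, reverse=True)]

# e.g. recommend([{"anarchy"}, {"revolution", "history"}]) puts "Rev Up" first.
```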
It's clearer to me why a website shouldn't be responsible for simple misinformation, even when it's algorithmically promoted.
Also, I have zero desire to be blocked :-) I pushed this line of thinking too hard on Twitter years ago (I was pissed at Meta at the time during COVID for promoting bullshit and downplaying actual information) and that was the result. I enjoy reading you.
Comments
> 2. Implementing a chronological feed still requires making design decisions (e.g., does a user see their own posts if users they follow share them?), which are expressive of what kind of service the platform operators want to provide.
>
> 4. The choice to prioritize certain content is like a newspaper's picking what to put on A1 above the fold, i.e., traditional publishing.

And on, and on…

These people have made up a demon called “the algorithm” and, like all demons pre-1974, it lacks any meaningful definition.