Between 2018 and 2023, social media companies like Twitter and Meta operated under the guise of responsible moderation: removing disinformation campaigns, publishing transparency reports, and projecting neutrality.
But as a 2024 study by Mugurtay, Kök, and Helwig revealed, this moderation was anything but neutral. Their analysis showed clear patterns: content takedowns correlated with a country's political stability, its democratic rankings, and its alignment with U.S. interests.
Platforms, in effect, were conducting a form of digital diplomacy.
By 2025, even that flawed system has collapsed. Twitter, rebranded as X, and Meta have both dismantled key moderation frameworks, enabling an unchecked surge in hate speech, harassment, and state-aligned propaganda.
This article traces the arc from politically motivated moderation to algorithmic indifference, unpacking what happens when platforms stop pretending to protect the public square and start profiting from its erosion.