tl;dr There are (whether we like them or not) some accurate/sensible things in Zuckerberg’s video/Kaplan’s post (e.g., the articulation of some of the challenges). Unfortunately, they have been deployed in service of what is essentially an act of kowtowing to political changes in the US, and of throwing others under the bus, the brunt of which the rest of us will bear. This ‘great pandering’ also does a disservice to important conversations we need to have.
Update on 2025-01-08: In retrospect, I think calling it a ‘great pandering’ was a bit much, and I should have just called it pandering (regular pandering?). I am going to leave it on there for a few days, in case someone wants to call me out on it, and then edit the title after that.1 End of update
In a post titled “More speech, fewer mistakes”, and a video featuring Mark Zuckerberg, Meta announced a number of changes to its content moderation approach. There’s more here than just changes to the fact-checking programme, though that seems to be the most tangible one, since many of the others pertain to actions taken in the background (writing/interpretation of policies, demotion of content, etc.).
Some of the articulation of the challenges was sensible and accurate, once you abstract it to some core points:
There are two more points to consider that weren’t really articulated in my reading, but which I am adding for context.
And so, they are going to make some changes. TechPolicy.Press summarises them:
Specific policy changes announced include:
- Eliminating fact-checkers in the US and replacing them with a “community notes” system similar to X (formerly Twitter);
- “Simplifying” content policies by removing certain restrictions on topics like immigration and gender;
- Changing enforcement approach for policy violations:
  - Focusing automated filters only on illegal and high-severity violations;
  - Requiring user reports before taking action on lower-severity violations;
  - Increasing the confidence threshold required before removing content;
- Reintroducing civic and political content into recommendation systems on Facebook, Instagram, and Threads;
- Relocating trust and safety and content moderation teams from California to Texas. “This will help remove the concern that biased employees are overly censoring content,” Zuckerberg wrote on Threads.
Taken at face value, not all of these seem undesirable, and they touch upon some important conversations that we are having / need to have, such as:
I cannot capture all the nuance associated with their justification/reasoning and the changes here. I have attempted to do so, to the extent that I can, by annotating:
What I haven’t mentioned so far is the not-at-all-subtle subtext of kissing the ring and throwing others under the bus that runs through all of this. This changes a lot. So, even if some of the articulation of the challenges is sensible, the motivations and outcomes are both highly suspect. And, to add to it, the effects of this act of kowtowing to political changes in the US will have ripples (not good ones) across the world. The changes also carry within them the consistent theme of Meta bending before power. All of this has also, probably, done a great disservice to important conversations we need to have, by misusing them in service of a ‘great pandering’.
I’m leaving it on for now, to own that I initially made this choice, but I will eventually remove it to reflect that I reconsidered that choice/framing. I don’t think I am changing anything directionally with this, but let me know if you disagree.↩︎
This is different from saying they do not have responsibility. The challenge is that we haven’t been able to arrive at a consensus on what that is/should look like.↩︎