This is a link-supported, unedited draft of an opinion piece that appeared in TheSouthFirst on 2025-01-12. The page is archived here.
Meta recently announced changes to its approach to content moderation, including but not limited to: ending its fact-checking programme in the US and replacing it with a crowd-sourced ‘Community Notes’-like system, simplifying content policies, limiting the use of automated filtering to specific ‘high-severity’ categories with an intent to rely on user reporting for others, moving trust and safety operations away from California, and scaling up, based on personalisation, the amount of political content one may see in their feeds (list).
Much of the initial conversation has focused on the changes to the fact-checking programme. Yet, it is the other changes that have potentially harder-to-forecast and more long-term effects, with varying implications for different parts of the world. It is important to consider the full set of changes through the lenses of the principles and politics they espouse, as well as the (missing) specifics. But first, let’s establish some baseline context about the interplay between content moderation, society and politics in a few short paragraphs.
Aside from certain categories, such as spam, nudity, child sexual abuse material, and terrorism-related content, platforms have historically been reluctant to take complex decisions related to content. Their reasons may be based on principles (a ‘stated’ commitment to free speech), self-interest (remaining apolitical to avoid blowback), or a combination of these. Often, this distinction matters less than we think. As many platforms, including Meta, took a more interventionist approach, especially since the COVID-19 outbreak, they found themselves having to ‘referee’ situations they were ill-equipped for. The complexity of issues, increasing levels of polarisation, the rise of populist politics, and the presence of deep-seated, underlying issues all played a large part.
Even with investments in internal capabilities, many of these would always remain intractable. And, when decisions were taken, they were either biased (whether intentionally or not) or perceived as stemming from bias, and used as rallying cries to mobilise further. Moreover, platforms made a lot of mistakes, often lacking local nuance and expertise in both human decision-making and automated system design. Sometimes they missed violations until after they were pointed out; at other times they overzealously enforced policies, leading to the suppression of genuine political speech or dissent. These problems were further compounded by scale and the presence of malign actors. This ‘refereeing’ resulted in platforms accruing (even) more power over speech, and over its reach. For these reasons, such attempts were fraught.
To complicate matters further, we have not yet determined how to separate the power we think they ought not to have from the responsibility we think they ought to have, even as we should recognise that entrenched social problems cannot be solved at the layer of content moderation alone. Nor have we determined how to scale media for speech distribution that are resistant to executive discretion, of both the C-suite and the state varieties.
With that context, let’s look at the proposed changes again, based on principles. A post titled “More speech, fewer mistakes”, detailing the changes, cited the biases of experts, mission creep, the increasing complexity of Meta’s own systems, policies and processes, and Meta itself being prone to making mistakes.
The articulation of the challenges associated directly with Meta was mostly reasonable. One could, conceivably, look at the changes and recast them in terms of principles such as: choosing to do less but attempting to do it better, which, in theory, would ease decision-making and reduce errors (simplifying policies, limiting automated filtering to categories such as terrorism, child sexual exploitation, drugs, fraud and scams); devolving power towards users (using user reports as a signal for enforcement actions, instituting a ‘Community Notes’ approach); and allowing more forms of speech (Community Notes are themselves a form of speech; surfacing political content in feeds). Since the context section reminds us that platforms are, at best, reluctant and imperfect actors, and at worst, bad-faith actors, some of the changes proposed seem directionally desirable, at least at face value. This would not be a wholly inaccurate analysis, but it would be an incomplete one because…
The political posturing and signalling are unmissable. Coupled with other reported changes, such as winding down DEI programmes or removing Pride/Nonbinary Messenger themes, this calls for both suspicion and skepticism. Attempting to cast, wholesale, external fact-checkers and internal teams as biased, and to reframe its own content moderation efforts, systems and processes as ‘censorship’, is disappointing, and unsurprisingly in line with the views of the incoming administration in the United States. It is also untrue to imply that fact-checking organisations were ‘censoring’, as posts flagged by them were labelled rather than removed.
This newfound acknowledgment of a 10-20% error rate for automated filtering stands in contrast to years of touting the breadth and high action rates of its automated filters.
Changes to its now-rechristened ‘Hateful Conduct’ policy, published later the same day, bear the hallmarks of accommodating or pacifying a conservative political platform, making specific changes on subjects like gender, immigration, and protections for groups, to name a few. This resulted in the Electronic Frontier Foundation, a digital civil liberties group, significantly dialling back the initial cautious optimism it had expressed.
We also need to look at the specifics, or rather, at the details we do not have.
Meta provided no evidence whatsoever to support its claims of bias among fact-checkers, or about the workings of the programme in general. Analysis suggests that it produced 10-14 fact-checks per day in the U.S. In a polarised, fast-paced information ecosystem, one can wonder if this was sufficient, but there is little to argue against the necessity of sustaining organisations that seek to contextualise developments. Jacob Mchangama, from ‘The Future of Free Speech’, lists studies where crowdsourced fact-checks were beneficial. Meta has not explained why the two systems could not run in parallel. Neither has the organisation attempted to evaluate a Community Notes system. Even Twitter/X piloted Community Notes as ‘Birdwatch’ before a wider rollout. Fact-checking organisations in India and around the world are, rightly, concerned.
More content policy changes will invariably follow, and will likely take cues from political positions. These will go hand in hand with the changes to enforcement mechanisms. To recap, the use of automated filtering will be restricted to high-severity categories, and user reports will be relied upon for others.
It is difficult to forecast whether, and to what extent, problematic kinds of speech will benefit from these changes, though many predict that they will. A lack of transparency has made it difficult to establish how effective the current mechanisms are in a global and non-English context. In fact, political speech has sometimes been restricted and explained away as an error or glitch.
Twitter/X’s experience may be analogous but not identical, since Meta has, so far, not announced a paring down of Trust and Safety operations or large-scale reversals of past enforcement actions.
User reports are unlikely to be a high-quality signal, as Dave Willner, former head of Content Policy at Facebook, states. Plus, it is unclear how meaningfully responsive Meta will be.
The reintroduction of political content as recommended content may play out in unexpected ways, as past attempts to tweak recommender systems have shown, since these systems are non-deterministic. What they recommend also interacts with political events, such as the reported 300% rise in ‘inflammatory prevalence’ in India that coincided with the anti-CAA-NRC protests and COVID-19 lockdowns.
The effects of these changes will ripple across the world in a few ways. Politically, they send the signal that Meta may cave to power, or use its modified approach to justify taking minimal action. There will be question marks over the long-term sustainability of institutional fact-checking. And the specifics of execution will determine what kinds of speech proliferate, with varying outcomes in different parts of the world.
The entire episode also reminds us of the need to make progress on important conversations: about how much control and discretion we think platforms (including but not limited to Meta) should exercise over speech, if they are, at best, reluctant and imperfect actors, and at worst, bad-faith actors. About how we can wrest control from them without turning it over to states, which have their own incentives to seek control. And about to what extent, by overly focusing on speech, we are attempting to respond to ‘wicked (social) problems’ by intervening at the most visible layer instead of the most appropriate one.