Social media is polluting society. Moderation alone won’t fix the problem

Moderation (whether automated or human) can potentially work for what we call “acute” harms: those caused directly by individual pieces of content. But a new approach is needed because there is also a host of “structural” problems—issues such as discrimination, declining mental health, and eroding civic trust—that manifest in broad ways across the product rather than through any individual piece of content. A famous example of this kind of structural issue is Facebook’s 2012 “emotional contagion” experiment, which showed that users’ affect (their mood, as measured by their behavior on the platform) shifted measurably depending on which version of the product they were exposed to.

In the blowback that ensued after the results became public, Facebook (now Meta) ended this type of deliberate experimentation. But just because they stopped measuring such effects does not mean product decisions don’t continue to have them.

Structural problems are direct outcomes of product choices. Product managers at technology companies like Facebook, YouTube, and TikTok are incentivized to focus overwhelmingly on maximizing time and engagement on the platforms. And experimentation is still very much alive there: almost every product change is deployed to small test audiences via randomized controlled trials. To assess progress, companies implement rigorous management processes (known as Objectives and Key Results, or OKRs) to advance their central missions, even using these outcomes to determine bonuses and promotions. Responsibility for addressing the consequences of product decisions is often placed on other teams, which sit downstream and have less authority to address root causes. Those teams are generally capable of responding to acute harms—but they often cannot address problems caused by the products themselves.
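As background on how that experimentation pipeline typically works, here is a minimal sketch of hash-based assignment, a common way product changes are rolled out to small test audiences. The experiment name, salt format, and rollout fraction are hypothetical; the point is only that each user lands deterministically in a treatment or control group so metrics can later be compared between the two.

```python
import hashlib

def assign_variant(user_id: str, experiment_name: str,
                   treatment_fraction: float = 0.01) -> str:
    """Deterministically bucket a user into 'treatment' or 'control'.

    Hash-based assignment is a standard A/B-testing technique: the same
    user always lands in the same group for a given experiment, and
    roughly `treatment_fraction` of users see the new product variant.
    """
    key = f"{experiment_name}:{user_id}".encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 10_000
    return "treatment" if bucket < treatment_fraction * 10_000 else "control"

# Example: expose roughly 1% of users to a hypothetical ranking change.
print(assign_variant("user-12345", "feed_ranking_v2"))
```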

With attention and focus, this same product development structure could be turned to the question of societal harms. Consider Frances Haugen’s congressional testimony last year, along with media revelations about Facebook’s alleged impact on the mental health of teens. Facebook responded to criticism by explaining that it had studied whether teens felt that the product had a negative effect on their mental health and whether that perception caused them to use the product less, and not whether the product actually had a detrimental effect. While the response may have addressed that particular controversy, it illustrated that a study aiming directly at the question of mental health—rather than its impact on user engagement—would not be a big stretch. 

Incorporating evaluations of systemic harm won’t be easy. We would have to sort out what we can actually measure rigorously and systematically, what we would require of companies, and what issues to prioritize in any such assessments. 

Companies could implement protocols themselves, but their financial interests too often run counter to meaningful limitations on product development and growth. That reality is a standard case for regulation that operates on behalf of the public. Whether through a new legal mandate from the Federal Trade Commission or harm mitigation guidelines from a new governmental agency, the regulator’s job would be to work with technology companies’ product development teams to design implementable protocols that can be measured during the course of product development and that assess meaningful signals of harm.

That approach may sound cumbersome, but adding these types of protocols should be straightforward for the largest companies (the only ones to which regulation should apply), because they have already built randomized controlled trials into their development process to measure the efficacy of product changes. The more time-consuming and complex part would be defining the standards; the actual execution of the testing would not require regulatory participation at all. It would only require asking diagnostic questions alongside normal growth-related questions and then making that data accessible to external reviewers. Our forthcoming paper at the 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization will explain this procedure in more detail and outline how it could effectively be established.
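To make “asking diagnostic questions alongside growth-related questions” concrete, here is a minimal sketch in Python. The metric names (`session_time`, `wellbeing_survey_score`), the `ExperimentResult` record, and the export format are all hypothetical, not anything the companies or the paper prescribe; the point is only that a harm-oriented metric can be computed from the same randomized test that already measures engagement and written out in a form an external reviewer could audit.

```python
import json
from dataclasses import dataclass, asdict
from statistics import mean

@dataclass
class ExperimentResult:
    """Summary of one randomized controlled trial (A/B test).

    Hypothetical structure: real pipelines track many more metrics, but the
    shape is the same -- growth metrics and diagnostic (harm-proxy) metrics
    computed from the same randomized split.
    """
    experiment_id: str
    growth_metric: float       # e.g. mean session time, treatment minus control
    diagnostic_metric: float   # e.g. mean well-being score, treatment minus control

def summarize_experiment(experiment_id, treatment, control):
    """Compute growth and diagnostic deltas from per-user records.

    `treatment` and `control` are lists of dicts with hypothetical keys
    'session_time' and 'wellbeing_survey_score'.
    """
    growth_delta = (mean(u["session_time"] for u in treatment)
                    - mean(u["session_time"] for u in control))
    diagnostic_delta = (mean(u["wellbeing_survey_score"] for u in treatment)
                        - mean(u["wellbeing_survey_score"] for u in control))
    return ExperimentResult(experiment_id, growth_delta, diagnostic_delta)

def export_for_review(results, path):
    """Write experiment summaries to a JSON file an external reviewer could audit."""
    with open(path, "w") as f:
        json.dump([asdict(r) for r in results], f, indent=2)
```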

When products that reach tens of millions of users are tested for their ability to boost engagement, companies would need to ensure that those products—at least in aggregate—also abide by a “don’t make the problem worse” principle. Over time, more aggressive standards could be established to roll back the existing harmful effects of already-approved products.
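One way to read the “don’t make the problem worse” principle, sketched below under stated assumptions, is as an aggregate one-sided check: across the experiments shipped in some period, the pooled effect on a designated harm metric should not be significantly negative. The sketch reuses the hypothetical `ExperimentResult` deltas from the earlier example plus an assumed per-experiment standard error; the 1.645 cutoff is the usual one-sided z threshold at the 5% level. The specific metric, threshold, and pooling rule are exactly the kind of standards a regulator and product teams would have to define.

```python
import math

def aggregate_harm_check(diagnostic_deltas, standard_errors, z_threshold=1.645):
    """Aggregate 'don't make the problem worse' check across shipped experiments.

    diagnostic_deltas: per-experiment treatment-minus-control effects on a
        harm-proxy metric where higher is better (e.g. a well-being score).
    standard_errors: per-experiment standard errors of those effects.
    Returns True if the inverse-variance-weighted pooled effect is not
    significantly negative at the chosen one-sided level.
    """
    weights = [1.0 / se**2 for se in standard_errors]
    pooled = sum(w * d for w, d in zip(weights, diagnostic_deltas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled / pooled_se
    # Fail only if the pooled harm effect is significantly below zero.
    return z > -z_threshold

# Example: three experiments with small positive and negative effects.
print(aggregate_harm_check([0.02, -0.01, 0.005], [0.01, 0.012, 0.009]))
```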


