Hated that video? YouTube’s algorithm might push you another just like it.
Mozilla researchers analyzed seven months of YouTube activity from over 20,000 participants to evaluate four ways that YouTube says people can “tune their recommendations”—hitting Dislike, Not interested, Remove from history, or Don’t recommend this channel. They wanted to see how effective these controls really are. 

Every participant installed a browser extension that added a Stop recommending button to the top of every YouTube video they watched, as well as to videos in their sidebar. Clicking it triggered one of the four algorithm-tuning responses every time.

Dozens of research assistants then eyeballed those rejected videos to see how closely they resembled tens of thousands of subsequent recommendations from YouTube to the same users. They found that YouTube’s controls have a “negligible” effect on the recommendations participants received. Over the seven months, one rejected video spawned, on average, about 115 bad recommendations—videos that closely resembled the ones participants had already told YouTube they didn’t want to see.

Prior research indicates that YouTube’s practice of recommending videos you’ll likely agree with and rewarding controversial content can harden people’s views and lead them toward political radicalization. The platform has also repeatedly come under fire for promoting sexually explicit or suggestive videos of children—pushing content that violated its own policies to virality. Following scrutiny, YouTube has pledged to crack down on hate speech, better enforce its guidelines, and not use its recommendation algorithm to promote “borderline” content.

Yet the study found that content that seemed to violate YouTube’s own policies was still being actively recommended to users even after they’d sent negative feedback.

Hitting Dislike, the most visible way to provide negative feedback, stops only 12% of bad recommendations; Not interested stops just 11%. YouTube advertises both options as ways to tune its algorithm. 

Elena Hernandez, a YouTube spokesperson, says, “Our controls do not filter out entire topics or viewpoints, as this could have negative effects for viewers, like creating echo chambers.” Hernandez also says Mozilla’s report doesn’t take into account how YouTube’s algorithm actually works. But that is something no one outside of YouTube really knows, given the algorithm’s billions of inputs and the company’s limited transparency. Mozilla’s study tries to peer into that black box to better understand its outputs.
