Google’s YouTube has reverted to using more human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors of its video platform.
When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other forms of harmful content or misinformation.
But Neal Mohan, YouTube’s chief product officer, told the Financial Times that one result of reducing human oversight was a jump in the number of videos removed, including a significant proportion that broke no rules. Almost 11m videos were taken down in the second quarter, between April and June, double the usual rate. “Even 11m is a very, very small, tiny fraction of the overall videos on YouTube . . . but it was a larger number than in the past,” he said.
The acknowledgment sheds light on the crucial relationship between the human moderators and the artificial intelligence systems that vet the material flowing onto the internet’s biggest platform for user-generated video. Amid widespread anti-racism protests and a polarising US election campaign, social media groups have come under increasing pressure to better police their platforms for toxic content. In particular, YouTube, Facebook and Twitter have been updating their policies and technology to stem the growing tide of election-related misinformation, and to prevent hate groups from stoking racial tensions and inciting violence.