If you open the Google Transparency Report website to view the YouTube Community Guidelines enforcement report, you will see the following notice:
“In response to the COVID-19 situation, we have taken steps to protect our extended workforce and reduce in-office staffing. As a result, we are temporarily relying more on technology to carry out some of the tasks normally done by human reviewers, which means we may be removing more content that does not violate our policies. This affects some of the metrics in this report and will likely continue to do so going forward. For the latest updates on how we are addressing the COVID-19 situation, visit g.co/yt-covid19.”
This measure was a response to the pandemic we are still living through. According to the report, between April and June – the months spent in confinement – YouTube removed more than 11.4 million videos for violating the platform’s rules, a figure higher than usual: in the same period of 2019, the service removed around 9 million videos. Why? Because moderation was automated, handled by AI.
Automated moderators vs. human moderators
YouTube began giving its artificial intelligence system greater autonomy to prevent platform users from watching violent, hateful, or otherwise harmful content and misinformation. This caused more videos to be deleted than necessary. For that reason, the Google service has decided to return moderation to human hands.
YouTube’s Chief Product Officer, Neal Mohan, told the Financial Times that the company has brought back its team of human moderators because, although technology can quickly remove clearly harmful content, its capabilities have limits. The algorithms could identify potentially harmful videos, but on many occasions they were not as good at deciding which ones should actually be removed.
Accurate but not efficient
More than 50% of the 11.4 million videos deleted from the platform during the second quarter of the year were removed before a single real YouTube user had seen them, and more than 80% were removed with fewer than ten views. The automated moderation system is fast and accurate, but not as efficient as it should be. Therefore, YouTube has reversed its decision.
“That’s where our trained human reviewers come in,” says Mohan, adding that they take the videos flagged by AI and then “make decisions that tend to be more nuanced, especially in areas like hate speech or medical misinformation or harassment.”