YouTube’s AI moderation systems censored too much — that’s why human moderators are returning
In March, YouTube stated that it would rely more on machine learning systems to flag and remove content that violated its policies on topics such as hate speech and misinformation. This week, however, the company changed course, explaining that the use of artificial intelligence for moderation had led to a significant increase in incorrect video removals.
About 11 million videos were removed from YouTube between April and June by these AI systems, roughly double the usual rate. Around 320 thousand of those removals were appealed, and about half of the appealed videos were restored.
For his part, Neal Mohan, YouTube’s chief product officer, indicated that one of the decisions the company made at the start of the pandemic was to accept that the AI would make mistakes, since it is not that precise, as long as those mistakes erred on the side of protecting users, even though that resulted in a greater number of deleted videos.
YouTube will return to human moderators
This admission of failure is remarkable, because all the major online social platforms, from Twitter to Facebook to YouTube, have come under increasing pressure to deal with the spread of misleading and hateful content on their sites, and they all maintain that algorithmic, automated filters are what can help them handle the immense number of posts on their platforms.
However, experts in artificial intelligence and moderation have expressed skepticism about these claims: judging whether a video contains, for example, subtle nods to racist beliefs can be challenging even for a human being, and computers lack our ability to understand the exact cultural context and nuance of such statements.
In this regard, automated systems can recognize the most obvious offenders, which is certainly very useful, but humans are still needed for the finer judgment calls.