According to a study by Anjana Susarla, professor of Information Systems at Michigan State University, the way YouTube’s recommendation algorithms work makes the streaming video platform a contributor to the spread of health-related fake news. That was the focus of Susarla’s research, though it does not mean YouTube’s harmful effects are limited to this type of subject.
Engagement by flawed criteria
What you search for and watch on YouTube determines the kind of content the platform suggests to keep you engaged. The problem is that if you land on a video full of data of dubious origin and click on a second or third, even out of pure curiosity, the platform’s recommendation system will keep surfacing more and more content containing potentially misleading information.
This is exactly what has happened during the pandemic, in which at least a quarter of the videos related to COVID-19 lack a valid scientific basis. According to Susarla’s study, the content that draws the most engagement is usually the kind that presents information most simply, and it is often the most mistaken.
Videos from trusted sources are often full of technical terms and expressions rarely used in everyday life, so they fail to attract the public’s attention and end up being recommended less by the algorithm. It works more or less like a domino effect: because the less reliable material uses more accessible language and gets straight to the point, it performs better on the platform, even though it is a minority of the content.
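The feedback loop described above can be sketched as a toy simulation. Everything here is hypothetical (the video titles, scores, and ranking rule are illustrative assumptions, not YouTube’s actual system): a recommender that ranks purely by an engagement signal will favor the simpler, less reliable videos, and each view feeds back into that signal, widening the gap.

```python
# Toy model of an engagement-driven recommender feedback loop.
# All titles and scores are hypothetical; this is NOT YouTube's real algorithm.

videos = [
    {"title": "Plain-language miracle cure claim", "reliable": False, "engagement": 0.9},
    {"title": "Clickbait home remedy",             "reliable": False, "engagement": 0.8},
    {"title": "Peer-reviewed explainer",           "reliable": True,  "engagement": 0.4},
    {"title": "Technical WHO briefing",            "reliable": True,  "engagement": 0.3},
]

def recommend(catalog, k=2):
    """Rank purely by predicted engagement, ignoring reliability."""
    return sorted(catalog, key=lambda v: v["engagement"], reverse=True)[:k]

def watch(video, boost=0.05):
    """Each view nudges the engagement signal up, reinforcing the loop."""
    video["engagement"] = min(1.0, video["engagement"] + boost)

# Simulate a few rounds of "user clicks whatever is recommended".
for _ in range(3):
    for v in recommend(videos):
        watch(v)

for v in recommend(videos):
    print(v["title"], "| reliable:", v["reliable"])
```

In this sketch the two unreliable videos start with the highest engagement, so they are the only ones ever recommended, and every round of viewing pushes their scores further ahead of the reliable sources.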
The problem is so serious that the flood of misinformation about the new coronavirus on the Internet led the World Health Organization (WHO) to declare an “infodemic”, referring to the ability of false news to spread through society much like a viral contagion.
Beyond the current measure of promoting verified channels in search results, YouTube and similar platforms could help curb the spread of fake news by giving more weight to user reports of misleading content.