Recent days have been marked by an important controversy over whether YouTube causes radicalization. A recent study claims that YouTube's recommendation algorithm does not push users toward videos that radicalize them.
A new article on YouTube's algorithm has reignited the controversy over whether the platform causes radicalization. The article, written by Mark Ledwich and Anna Zaitsev, claims that YouTube does not recommend similar videos after a user watches an extremist video. In the article, which has not yet been evaluated in a peer-reviewed journal, the authors argue that YouTube's algorithm actually favors mainstream media channels over independent content.
Other experts working in this field responded to the article by Ledwich and Zaitsev. They criticized its methodology, arguing that the recommendation algorithm is only one of several important factors in such an assessment and that data science alone cannot answer this question.
Sociologist Zeynep Tüfekçi, who studies technology, was among the first to argue, in an article published in the New York Times, that YouTube plays a role in radicalization. Tüfekçi wrote that YouTube's recommendations slowly steer users toward more extreme content: videos about jogging lead to ultra-marathons, videos about vaccines lead to conspiracy theories, and videos about politics lead to videos denying the Holocaust.
If algorithms are left unchecked, extremists will take over the media.
Guillaume Chaslot, a former YouTube employee, has examined Zeynep Tüfekçi's arguments in detail. Chaslot says that YouTube's recommendations are in fact biased toward conspiracy theories and factually false videos, because such videos are what keep people spending more time on the site.
Maximizing watch time is the basis of YouTube's algorithms. The fact that YouTube, as a company, is not transparent about how they work makes the fight against radicalization nearly impossible. Without transparency, it is difficult to find ways to improve the situation.
YouTube is hardly alone in its lack of transparency. Many organizations, from companies to government agencies, are opaque about the large systems they operate. Systems ranging from placing children in schools to determining credit scores rely on machine-learning algorithms, yet institutions and companies generally offer no explanation of how these systems reach their decisions.
Should how the recommendation system works be explained?
Machine-learning systems are often large and complex. They are commonly described as black boxes: information goes in, and information or an action comes out. Trying to understand how a site works without knowing how an algorithm like YouTube's recommendation system operates is like trying to understand how a car works without opening the hood.
Companies and government agencies can be more transparent about the algorithms they use, in order to prevent radicalization and other excesses. There are two ways to achieve this transparency. The first is counterfactual explanations: the basic logic of an algorithm can be conveyed without exposing its entire inner workings. A bank's explanation of a credit decision, "If you are over 18 years old and have no debt, your application is approved," can be a simple but effective description of how the system operates.
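The idea above can be illustrated with a minimal sketch. The decision rule (over 18, no debt) comes from the article's example; the function names and the exact wording of the explanation are hypothetical, and a real bank's model would be far more complex.

```python
def approve(age: int, debt: float) -> bool:
    """Hypothetical credit rule from the article's example:
    approve if the applicant is at least 18 and has no debt."""
    return age >= 18 and debt == 0


def explain(age: int, debt: float) -> str:
    """Return a counterfactual explanation for a rejection,
    without exposing the model's full internal logic."""
    if approve(age, debt):
        return "Application approved."
    reasons = []
    if age < 18:
        reasons.append("you were at least 18 years old")
    if debt > 0:
        reasons.append("you had no outstanding debt")
    return ("Your application would be approved if "
            + " and ".join(reasons) + ".")


print(explain(age=25, debt=500.0))
```

The point is that the applicant learns what would have changed the outcome, while the bank reveals nothing beyond the factors relevant to this one decision.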
The second method is algorithm testing and auditing. Back-testing and continuous monitoring of algorithms can catch harmful recommendations. Auditing YouTube's algorithm could reveal which videos the recommendation feature promotes most.
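An external audit of this kind can be sketched very simply, assuming the auditor has collected a log of (watched video, recommended video) pairs; the video titles below are hypothetical and echo the examples earlier in the article.

```python
from collections import Counter

# Hypothetical audit log of (watched_video, recommended_video) pairs,
# e.g. gathered by crawling the site with fresh accounts.
recommendation_log = [
    ("jogging tips", "marathon training"),
    ("jogging tips", "ultra-marathon documentary"),
    ("vaccine basics", "vaccine conspiracy"),
    ("vaccine basics", "vaccine conspiracy"),
    ("election news", "holocaust denial"),
]


def audit(log):
    """Count how often each video is recommended, revealing
    which content the system promotes most."""
    return Counter(rec for _, rec in log).most_common()


for video, count in audit(recommendation_log):
    print(f"{video}: recommended {count} time(s)")
```

Continuous monitoring would simply repeat such counts over time and flag when harmful content starts climbing the ranking.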
Counterfactual explanations and algorithm audits are extremely important, even though they are difficult and costly, because the alternative may produce far worse outcomes for humanity. If algorithms are not checked and audited, conspiracy theorists and extremists may gradually take over the media.