Facebook: Artificial Intelligence Technology Is Insufficient at Removing Objectionable Content


Internal analysis reports published by Facebook have been examined, and they make clear that the company's artificial intelligence technology is insufficient at intervening against objectionable content on the platform. Facebook executives have long relied on artificial intelligence to keep the company's chronic problems, hate speech and violent content, off the platform.

According to the analysis of the reports, Facebook's artificial intelligence failed both to detect violent content and to intervene against it.

Artificial intelligence achieves only 2% success

Facebook uses artificial intelligence technology to act on content that violates the company's rules. Scanning algorithms, called classifiers, form the basis of the company's content moderation system. Facebook's algorithms can identify inappropriate posts and then limit their spread or remove them. However, according to the reports published by the company, this technology has not been successful at identifying inappropriate content. For example, it flagged a car wash video as inappropriate while failing to act on a video of a violent car crash.

The Wall Street Journal analyzed the documents published by Facebook alongside the company's own rules. According to the documents examined, Facebook was not able to reliably distinguish offensive or dangerous content and remove it from the platform. Per Facebook's own data, only 0.6% of rule-violating content was removed in March, when artificial intelligence was not in use. With current AI technology, an estimated 2% of violating posts on the platform are removed. Officials admitted that, based on the data so far, this rate is unlikely to rise above 10% anytime soon without a major improvement.