Zoom, one of the most controversial applications of recent times, has announced that it uses a variety of tools, including artificial intelligence and machine learning, to prevent nudity on its platform. How the company will manage this process remains an open question.
Zoom, the online conferencing application whose popularity has surged as people stayed home during the coronavirus pandemic, has been the target of criticism over its security and privacy shortcomings in recent weeks. So much so that the application was banned in schools across the US due to security concerns.
Now Zoom is trying to rid the platform of adult content. The application's terms of service already prohibit sexually explicit material and imagery such as 'nudity, violence, and pornography'. As you might guess, enforcing this is not straightforward in an application that is claimed to be protected by 'end-to-end encryption'.
Zoom uses artificial intelligence to detect users who violate company policies:
In a recent blog post, Cory Doctorow pointed out that Zoom uses artificial intelligence to detect and prevent nudity. This raises an obvious question: if conversations really are encrypted end-to-end, no plausible method comes to mind for inspecting the content of a video broadcast.
“We encourage users to report suspected violations of our policies, and we use a variety of tools, including machine learning, to proactively identify accounts that may have infringed,” a Zoom spokesperson said on the matter.
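To make the idea of "proactively identifying accounts" concrete, here is a minimal sketch of how a moderation pipeline of this kind could work in principle. This is not Zoom's actual system; the feature names (report counts, account age) and the tiny logistic-regression model are invented purely for illustration, assuming flagging is driven by account-level signals rather than by decrypting call content.

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.1, epochs=500):
    """Train a tiny logistic-regression model with stochastic gradient descent."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def flag_account(w, b, x, threshold=0.5):
    """Flag an account when the predicted violation probability is high."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= threshold

# Invented training data: [user_reports_received, account_age_in_years]
# with label 1 meaning the account was found to violate policy.
X = [[0, 3], [1, 5], [8, 1], [12, 0.5], [0, 1], [9, 2]]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logistic(X, y)
print(flag_account(w, b, [10, 1]))  # heavily reported account
print(flag_account(w, b, [0, 4]))   # unreported, long-standing account
```

The point of the sketch is that user reports become training signal: a model scores accounts on aggregate behavior, and only high-scoring accounts are escalated for review, which is one way "proactive identification" can coexist with encrypted call content.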
Apart from Zoom, many other sites and apps, such as Instagram and Facebook, use artificial intelligence and machine learning tools to moderate posts. Previous examples have shown many times how inconsistent and error-prone this approach can be.
In the coming days, we will see how Zoom's artificial intelligence and machine learning actually perform. What kind of policy the company will follow in this process is the real question.