Facebook, one of the most popular social media platforms, recently introduced its Transparency Center and has now announced its transparency report for the first quarter of 2021.
Facebook stated that the Transparency Center explains how company policies are set, how and when they are updated, and how the company uses human reviewers alongside machine technology to detect and remove unwanted content from the platform.
Transparency report for the first quarter of this year shared by Facebook
Guy Rosen, a Facebook vice president, announced the Community Standards Enforcement Report covering the first quarter of 2021. He shared metrics for a total of 22 policies, 12 on Facebook and 10 on Instagram, to show the public how the company enforces its policies on the popular social media platforms.
Rosen also shared the rates of harmful content users were exposed to. Nudity accounted for 0.03 to 0.04 percent of viewed content on both Facebook and Instagram, while hate speech, violent and graphic content accounted for 0.05 to 0.06 percent on Facebook. He stated that AI (artificial intelligence) technology has been the biggest factor in the company's progress since 2017, when the comparable rate stood at 23.6 percent.
The report goes on to state that Facebook took action on approximately 8.8 million pieces of bullying content, 9.8 million pieces of harassment content and 25.2 million pieces of hate speech content in the first quarter of the year. On Instagram, action was taken on 5.5 million pieces of bullying content, 324,500 pieces of hate speech content and 6.3 million pieces of harassment content.
Misinformation about the pandemic was also examined and removed.
Mark Fiore of the Facebook team shared the Intellectual Property Transparency report, which covers the period from the second half of 2020 to the present and details content removed for fraud as well as the steps taken against piracy on Facebook.
According to Fiore, 99.7 percent of all fraud-related removals were proactive, meaning the content was taken down before anyone reported it. This figure covers the removal of approximately 335 million pieces of suspected fake content. On Instagram, approximately 2.5 million pieces of suspicious content were removed, 82.8 percent of them proactively.
Turning to copyright, about 10 million pieces of content were removed from Facebook, with artificial intelligence involved in 78 percent of these removals. On Instagram, about 2 million pieces of content were removed, with artificial intelligence assisting in 59 percent of those cases.
Facebook vice president and deputy general counsel Chris Sonderby said in his report on government requests that requests for user data rose from 173,000 to 191,000, an overall increase of roughly 10 percent over the past six months.
The countries submitting the most requests for user data include the USA, India, Germany, France, Brazil and the United Kingdom.