Between May 15 and June 15, Facebook took action against 2.5 million posts of “violent and graphic content” in the country, with 99.9% of complaints handled proactively, according to the social media giant’s monthly compliance report released on Friday. During this period, the company “actioned” roughly 30 million pieces of content, including posts, profiles, and pages, across 10 violation categories.
The new IT rules require significant digital platforms (those with more than 5 million users) to publish monthly compliance reports detailing the complaints they receive and the actions taken in response. The report must also include the number of specific communication links or pieces of information that the intermediary removed or disabled through proactive monitoring using automated tools.
Between May 15 and June 15, Facebook took action on almost 30 million pieces of content across several categories, including spam (25 million), violent and graphic content (2.5 million), adult nudity and sexual activity (1.8 million), and hate speech (311,000). The social media giant also took action on content classified as bullying and harassment (118,000), suicide and self-harm (589,000), dangerous organizations and individuals: terrorist activity (106,000), and dangerous organizations and individuals: organized hate (75,000).
Instagram, by comparison, took action on roughly 2 million posts across nine categories, including suicide and self-harm (699,000), violent and graphic content (668,000), adult nudity and sexual activity (490,000), and bullying and harassment (108,000).
Facebook noted that flagged content may include posts, images, videos, or comments, with corrective action ranging from outright removal to covering the content with an appropriate audience warning.
Facebook has consistently invested in technology, people, and processes over the years, a company spokesperson said, to meet its goal of keeping users safe and secure online while allowing them to express themselves freely on its platform. The US-based social media giant said its next report, covering user complaints received and actions taken, will be published on July 15.
Data for WhatsApp, which is part of Facebook’s family of apps, will be included in the July 15 report. Google and Koo, a homegrown social media app, are among the other significant platforms that have made their reports public. According to Koo’s report, it proactively moderated 54,235 pieces of content in June, while its users reported 5,502 posts.
In compliance with the new IT rules, Google was the first to issue a transparency report, on June 30, covering complaints received and resolved between April 1 and April 30. According to Google, there is a two-month reporting lag to allow for data processing and validation. During the reporting period, Google received a total of 27,762 complaints, of which 96.2 percent related to copyright infringement. Based on these complaints, Google carried out 59,350 takedown actions.