The Meta-owned instant messaging platform WhatsApp, in its most recent monthly compliance report, said it banned around 20,69,000 accounts. The company stated that the bans resulted from its efforts to combat abuse on the platform, as well as from “negative feedback” received from users via the “Report” option. It also stated that it received 500 grievance reports from India in the month of October.
These comprised 146 instances of account support, 248 ban requests, 95 related to product support, and 11 grievance reports related to safety. WhatsApp deleted 18 accounts after receiving external ban requests. The figures are somewhat lower than September’s: WhatsApp received 560 grievance reports in India that month, comprising 121 cases of account support, 309 ban requests, 98 related to product support, and 32 grievance reports regarding safety.
In its compliance report, WhatsApp said, “We are particularly focused on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred.”
These monthly compliance reports are mandated under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which came into effect on May 25, 2021. The rules require reporting on two main aspects: grievances received from users in India through WhatsApp’s grievance mechanisms, and accounts actioned in India through WhatsApp’s prevention and detection methods for violating Indian laws or the company’s terms of service.
The fifth report produced under the new IT Rules said that, in addition to addressing grievances, the company deploys tools and resources as preventative measures, focusing on stopping harmful behavior before it arises.
The report also highlights WhatsApp’s end-to-end encrypted messaging service, referring to it as an “industry leader”. “Because some of our users’ most personal moments are shared on WhatsApp, we use end-to-end encryption, which guarantees only the sender and receiver can access the contents of messages,” WhatsApp noted.
After the new IT Rules came into effect, major social media platforms such as Google, Facebook, Instagram, WhatsApp, Twitter, and Koo began publishing monthly compliance reports. During the third quarter of fiscal year 2021-22 (July to September), these platforms collectively removed, banned, or otherwise moderated 106.9 million pieces of content and user accounts. The majority of this content was flagged by the platforms’ in-house Artificial Intelligence (AI) and Machine Learning (ML) moderation systems, with a smaller share reported by users.
Depending on the platform, the main categories for user-reported and proactively moderated posts included graphic sexual and violent material, spam, hate speech, terrorism-related content, and content relating to suicide. In the same period, Meta-owned WhatsApp banned 6.3 million accounts, Instagram banned or otherwise actioned 8.2 million accounts and pieces of content, and Facebook’s overall figure was just under 92 million.
Source: WhatsApp