Google India took down 61,114 content pieces in November 2021, with 60,387 (about 98.8 percent) of the removals related to copyright issues, the search engine giant said in its monthly transparency report.
The overall number of content pieces removed was 25.7 percent higher than the 48,594 removals in October; Google India had purged 76,967 content pieces in September. According to the compliance report, the tech giant evaluates content reported under its community guidelines, content policies, and legal requirements, and based on this review it may take removal action for one or more of the reasons cited in the request.
Apart from copyright infringement, other reasons for content removal in November included trademark infringement, circumvention, court orders, and graphic sexual content, according to the report. Separately, Google India reported that automated detection resulted in the removal of 375,468 content pieces in November, a decline of 2.3 percent from the previous month’s 384,509 instances.
To combat the spread of harmful content such as child sexual abuse material and violent extremist content, the tech conglomerate uses automated detection systems across the majority of its products. It received 26,087 complaints from individual users across the country in November. “These complaints relate to third-party content on Google’s SSMI platforms that is believed to infringe local laws or personal rights. We balance privacy and user protection to quickly remove content that violates our Community Guidelines and content policies; restrict content (e.g., age-restrict content that may not be appropriate for all audiences); or leave the content live when it doesn’t violate our guidelines or policies,” the compliance report added.
The content was taken down under multiple categories: copyright (60,387), trademark (535), circumvention (131), court order (56), and graphic sexual content (5). According to Google, a single complaint may identify multiple items that relate to the same or different pieces of content, and each unique URL cited in a particular complaint is counted as an individual “item” that is removed.
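To illustrate that counting rule, the minimal sketch below tallies one “item” per unique URL within each complaint. The complaint structure and field names here are hypothetical examples, not Google’s actual schema or implementation.

```python
# Illustrative sketch (not Google's implementation): each unique URL
# within a single complaint is counted as one removable "item".
from typing import Dict, List

# Hypothetical complaints: complaint ID -> list of flagged URLs.
complaints: Dict[str, List[str]] = {
    "complaint-001": [
        "https://example.com/video/abc",
        "https://example.com/video/abc",  # duplicate URL inside one complaint
        "https://example.com/video/xyz",
    ],
    "complaint-002": [
        "https://example.com/post/123",
    ],
}

def count_items(complaints: Dict[str, List[str]]) -> int:
    """Count one 'item' per unique URL within each complaint."""
    total = 0
    for urls in complaints.values():
        total += len(set(urls))  # de-duplicate URLs within a single complaint
    return total

print(count_items(complaints))  # -> 3 items (2 from the first complaint, 1 from the second)
```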
Since May 2021, significant social media intermediaries (SSMIs) such as Google, Facebook, WhatsApp, Instagram, Twitter, and Koo have been required to publish monthly compliance reports under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Together, these platforms removed, banned, or otherwise moderated roughly 106.9 million content pieces and user accounts during the second quarter of the fiscal year 2021-22 (July to September).
For instance, Meta-owned WhatsApp banned 6.3 million accounts during that period, Instagram banned or actioned 8.2 million accounts and content pieces, and Facebook’s total for July-September was slightly under 92 million. Twitter banned over 50,000 accounts, while its homegrown alternative Koo moderated over 139,000 posts and took action on more than 125,400.