According to US media reports, Facebook internal documents show “a struggle with misinformation, hate speech, and celebrations of violence” in India, the company’s largest market, with researchers pointing out that there are groups and pages “replete with inflammatory and misleading anti-Muslim content” on its platform.
According to a report published on Saturday by The New York Times, a Facebook researcher built a new user account in February 2019 to see how the social media platform would look for someone living in Kerala.
“For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site. The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month,” the NYT report said.
The report, based on disclosures obtained by a consortium of news organisations including The New York Times and the Associated Press, stated, “Internal documents reflect a struggle with misinformation, hate speech, and celebrations of violence in the country, the company’s biggest market.”
The documents are part of a larger collection amassed by former Facebook employee Frances Haugen, who recently testified before the US Senate about the company and its social media platforms.
The internal records detail how bots and phoney accounts linked to the country’s ruling party and opposition figures wreaked havoc on national elections, according to the report.
According to the New York Times, a separate assessment released after the 2019 national elections found that over 40% of top views, or impressions, in the Indian state of West Bengal were fake or inauthentic.
One fake account had racked up over 30 million impressions.
Facebook researchers noted in an internal paper titled ‘Adversarial Harmful Networks: India Case Study’ that there were groups and pages on Facebook packed with inflammatory and deceptive anti-Muslim information.
Internal documents also show how a concept “pioneered” by Facebook founder Mark Zuckerberg to focus on “meaningful social interactions” led to greater misinformation in India, particularly during the pandemic.
Another Facebook report, according to the NYT, showed Bajrang Dal’s efforts to spread anti-Muslim propaganda on the platform.
“Facebook is considering designating the group as a dangerous organisation because it is inciting religious violence on the platform, the document showed. But it has not yet done so,” the NYT report said.
According to the documents, Facebook lacked sufficient resources in India and was unable to address the issues it had introduced, such as anti-Muslim messages.
According to Andy Stone, a Facebook spokesman, Facebook has cut the amount of hate speech users see internationally in half this year.
“Hate speech against marginalised groups, including Muslims, is on the rise in India and globally,” Stone said in the NYT report. “So we are improving enforcement and are committed to updating our policies as hate speech evolves online.”
“There is definitely a question about resourcing” for Facebook in India, but the answer is really not “just throwing more money at the problem,” according to Katie Harbath, who worked as a director of public policy at Facebook for ten years and was intimately involved in securing India’s national elections.
According to the New York Times, Facebook staff have been conducting testing and field studies in India for several years.
That effort increased in the run-up to India’s 2019 national elections.
According to the company, a few Facebook personnel travelled to India in late January 2019 to meet with colleagues and interact with dozens of local Facebook users.
“According to a memo written after the trip, one of the key requests from users in India was that Facebook ‘take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension’,” the report said.
According to an internal document titled ‘Indian Election Case Study,’ Facebook put in place a series of actions after India’s national elections began to halt the flow of disinformation and hate speech in the country.
“The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners (the third-party network of outlets with which Facebook works to outsource fact-checking) and increasing the amount of misinformation it removed.”
“The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots or fake accounts linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.”
According to a Facebook report cited by the New York Times, the company has trained its AI systems on five of India’s 22 officially recognised languages. However, it lacked sufficient data in Hindi and Bengali to adequately police content, and much of the content directed at Muslims “is never flagged or actioned.”