
Source: VentureBeat
Recently, a team of engineers at Meta’s Facebook came across a “massive ranking failure” that exposed as much as half of all News Feed views to potential “integrity risks” over the past six months. According to internal documents, the engineers first caught the issue in October 2021, when an unexpected wave of misinformation began flowing through the News Feed.
Instead of suppressing posts from repeat misinformation offenders, accounts flagged by Facebook’s network of outside fact-checkers for repeatedly spreading misinformation, the News Feed was giving those posts wider distribution, spiking views worldwide by as much as 30%. Unable to find the root cause, the engineers watched the surge subside after a few weeks, then flare up repeatedly until the ranking problem was finally fixed on Friday, March 11.
During the same period, the engineers also noticed the social media platform’s inability to demote probable nudity, violence, and even Russian state media outlets it had recently targeted owing to the war. The issue was internally labeled a ‘level one SEV,’ a designation reserved for high-priority technical crises, such as Russia’s ongoing block of Facebook and Instagram.
According to Meta spokesperson Joe Osborne, these “inconsistencies” came up on five distinct occasions. The bug, first introduced in 2019, had little noticeable impact until last year.
“We traced the root cause to a software bug and applied needed fixes,” said Osborne, adding that the bug “has not had any meaningful, long-term impact on our metrics” and didn’t apply to content that met the system’s threshold for deletion.
The social media giant has long treated downranking as a way to improve the quality of its News Feed, steadily expanding the types of content the system acts on. Downranking has been used in response to wars and controversial political stories, and it has fueled concerns about shadow banning and calls for legislation. However, Facebook has yet to reveal exactly how downranking affects what users actually see, or what the consequences are when the system acts up.
Downranking suppresses what the platform refers to as “borderline” content, material that comes close to violating its rules, as well as content that its AI systems flag as a likely violation but that still requires human review. Facebook’s leaders often boast that their AI systems get better each year at identifying content like nudity and hate speech, and the company has announced an initiative to start downranking political content in an effort to return the platform to its “lighthearted roots.”
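Meta has not published the internals of its ranking pipeline, but downranking is commonly described as a demotion applied on top of a post’s base ranking score. The sketch below is a minimal, purely hypothetical model of that idea; every name in it (Post, DEMOTION_FACTORS, rank_feed) is invented for illustration and is not Meta’s actual code. It also hints at how such a scheme can fail quietly: a missing or mismatched demotion factor simply leaves a flagged post at full score, which shows up as extra distribution rather than an error.

```python
from dataclasses import dataclass, field

# Hypothetical demotion multipliers; Meta has not disclosed real values.
DEMOTION_FACTORS = {
    "repeat_misinfo_offender": 0.3,          # cut reach to ~30% of normal
    "borderline_nudity": 0.5,
    "likely_violation_pending_review": 0.4,
}

@dataclass
class Post:
    post_id: str
    base_score: float                        # engagement-predicted score
    integrity_flags: list = field(default_factory=list)

def final_score(post: Post) -> float:
    """Apply each integrity flag as a multiplicative penalty."""
    score = post.base_score
    for flag in post.integrity_flags:
        # .get(flag, 1.0) means an unrecognized flag leaves the score
        # untouched -- one plausible way a stale flag name could silently
        # disable a demotion, similar to the failure mode described above.
        score *= DEMOTION_FACTORS.get(flag, 1.0)
    return score

def rank_feed(posts: list) -> list:
    """Order the feed by demoted score, highest first."""
    return sorted(posts, key=final_score, reverse=True)

if __name__ == "__main__":
    posts = [
        Post("a", base_score=0.90, integrity_flags=["repeat_misinfo_offender"]),
        Post("b", base_score=0.60),
        Post("c", base_score=0.50, integrity_flags=["borderline_nudity"]),
    ]
    for p in rank_feed(posts):
        print(p.post_id, round(final_score(p), 2))  # b 0.6, a 0.27, c 0.25
```

A multiplicative design like this is convenient because multiple demotions compose naturally, but it also makes this class of bug hard to spot: nothing crashes and no content is deleted, the feed just quietly stops suppressing.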