A contest inviting the public to scrutinize Twitter's image-cropping AI algorithm, and to identify any bias it shows towards faces or people of a certain color or age, among other things, has gone viral. Through it, participants have found that the photo-cropping AI discriminates by age and weight, and is also biased towards Western languages, including English. The AI crops an image after identifying its most important areas, and had previously drawn serious backlash when reports emerged claiming that it favoured white, female faces during auto-cropping. Following that backlash, the social media giant vowed in May to stop using it.
Biased Depending On Age, Gender, Race
The contest in question, known as the “Algorithm Bias Bounty Challenge,” asks people to submit the results they obtain after running images through the cropping tool, so that any discrepancies can be documented, particularly ones that harm users by “failing” to process images perceived as “natural.”
The top entry is credited to Bogdan Kulynych from Switzerland, who used deepfake technology to show that the tool favours thinner, younger-looking people over others. These results are backed by Patrick Hall of AI consulting firm BNH, who adds that the number of bias cases in AI is rising, particularly due to the lack of regulation. Another entry comes from Ariel Herbert-Voss of OpenAI, who reports that these findings basically reflect the biases of the people who helped design the AI, while also adding that a thorough analysis of the tool could help make it more inclusive.
These thoughts are backed by other AI experts, who also believe that if the tool’s algorithm were made accessible to third parties, it might become possible to pinpoint the exact problem. Amit Elazari, Director of Global Cybersecurity Policy at Intel, finds this prospect “exciting,” and hopes to “see more of it.”
How It Started
The discriminatory behaviour of the AI first came to light back in September 2020, when a Canadian student shed light on how it crops images. The report shows how the algorithm “zeroes in” on faces, as well as other areas of interest in the image, such as objects, animals, or text. However, it apparently does not do so without bias, which, in this case, is towards white, feminine faces.
Soon enough, other instances of racial and gender-based discrimination on the programme’s part were noted, along with many other biases. For example, the tool has also been found to be biased against people who have white hair, and to prefer Latin script over Arabic script, thereby discriminating against Middle Eastern and Central Asian languages. And now, it has also been established that this Twitter photo-cropping AI discriminates by age and weight, among other things.
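The “zeroing in” behaviour described above is characteristic of saliency-based cropping, where a model scores every pixel for visual importance and the crop window is centred on the highest-scoring region; any bias in those scores is then baked directly into the crop. The snippet below is a minimal sketch of that general idea, not Twitter’s actual model: the `saliency_map` here is a hand-made stand-in for the output of a trained saliency network.

```python
import numpy as np

def crop_around_saliency_peak(image: np.ndarray,
                              saliency_map: np.ndarray,
                              crop_h: int, crop_w: int) -> np.ndarray:
    """Centre a (crop_h, crop_w) window on the highest-scoring
    point of a per-pixel saliency map."""
    # Locate the most salient pixel.
    peak_y, peak_x = np.unravel_index(np.argmax(saliency_map),
                                      saliency_map.shape)
    h, w = image.shape[:2]
    # Clamp the window so it stays inside the image bounds.
    top = min(max(peak_y - crop_h // 2, 0), h - crop_h)
    left = min(max(peak_x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 100x100 grey image whose most "salient" point,
# standing in for a detected face, sits at row 70, column 20.
image = np.full((100, 100, 3), 128, dtype=np.uint8)
saliency = np.zeros((100, 100))
saliency[70, 20] = 1.0  # stand-in for a model's saliency score
crop = crop_around_saliency_peak(image, saliency, 50, 50)
print(crop.shape)  # → (50, 50, 3)
```

If the saliency model systematically assigns higher scores to some faces than others, the crop will systematically centre on those faces, which is exactly the class of behaviour the bounty contest was designed to surface.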