On Sunday (20), Twitter published an official apology for racial bias in its image-cropping algorithm, after complaints from several users who found that the feature automatically focused on white faces instead of Black ones. The company initially claimed to have tested the feature before deploying it but, faced with mounting evidence, acknowledged that the testing was not enough.
When Twitter started using the smart cropping tool in 2018, the company’s explanation was that an algorithm would determine the most “salient” part of the image, the region to which a viewer’s eyes are usually drawn, and crop around it to generate the photo preview.
Since the word “salient” allows for subjective interpretation, speculation soon began about what exactly the algorithm would focus on. It quickly became clear that faces were the algorithm’s priority, but no one could determine whether it favored smiling faces over serious ones, or whether brightness influenced the prioritization.
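To make the idea of saliency-based cropping concrete, here is a minimal sketch. It is not Twitter's actual model (which was a trained neural network); it uses a toy saliency measure, deviation from mean luminance, purely to illustrate the mechanism of picking a crop window centered on the most "salient" point:

```python
import numpy as np

def simple_saliency(gray):
    """Toy saliency map: absolute deviation from the mean luminance.
    (Illustrative stand-in for a trained saliency model.)"""
    return np.abs(gray - gray.mean())

def salient_crop(gray, crop_h, crop_w):
    """Crop a window centered on the most salient pixel, clamped to bounds."""
    sal = simple_saliency(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    top = min(max(y - crop_h // 2, 0), gray.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), gray.shape[1] - crop_w)
    return gray[top:top + crop_h, left:left + crop_w]

# A dark image with one bright spot: the crop follows the spot.
img = np.zeros((100, 100))
img[70, 80] = 255.0
preview = salient_crop(img, 40, 40)
```

Whatever pixels score highest under the saliency measure end up dominating the preview, which is exactly why a biased measure produces systematically biased crops.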
The biased crops
These crops are meant to keep images from taking up too much space in the main feed and to allow more images to be shown in a single tweet. However, a doctoral student named Colin Madland began to suspect a racial bias while using the Zoom video-conferencing software.
When posting a screenshot of himself on a call, Madland, who is white, noticed that his Black colleague had disappeared from the preview: Twitter’s algorithm had automatically cropped the image to show only Madland.