Twitter investigating why its photo preview has apparent racial bias
Users of Twitter have uncovered a problem with the neural network that crops photo previews. The social networking company is investigating why the AI algorithm it uses to generate photo previews shows white people's faces more frequently than black people's.
Over the past weekend, several users highlighted the issue and many more tested it with their own examples. In post after post containing both a black face and a white face, the previewing tool showed the white face more often.
While this is by no means a scientifically significant sample (yet), the testing delivered startling results. Many of the tests put two faces either side by side or one above the other, making the image too wide or too tall to display both in the preview. Whether the white face was placed on the left or the right, above or below, the preview more often than not showed the white face.
“Previously, we used face detection to focus the view on the most prominent face we could find. While this is not an unreasonable heuristic, the approach has obvious limitations since not all images contain faces. Additionally, our face detector often missed faces and sometimes mistakenly detected faces when there were none. If no faces were found, we would focus the view on the center of the image. This could lead to awkwardly cropped preview images,” a Twitter employee wrote.
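The heuristic described in that quote can be sketched in a few lines. The code below is purely illustrative and is not Twitter's actual implementation; it assumes a face detector has already produced bounding boxes as `(x, y, width, height)` tuples, and it takes "most prominent face" to mean the largest bounding box:

```python
def choose_focal_point(image_w, image_h, faces):
    """Return the (x, y) point the preview crop should center on.

    `faces` is a list of (x, y, width, height) bounding boxes from some
    face detector (not shown here).
    """
    if faces:
        # "Most prominent" is interpreted here as largest box area.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return (x + w // 2, y + h // 2)
    # No faces detected: fall back to the center of the image,
    # as the quoted description says.
    return (image_w // 2, image_h // 2)


def crop_window(image_w, image_h, crop_w, crop_h, faces):
    """Compute a crop_w x crop_h preview window centered on the focal
    point, clamped so it stays inside the image bounds."""
    fx, fy = choose_focal_point(image_w, image_h, faces)
    left = min(max(fx - crop_w // 2, 0), image_w - crop_w)
    top = min(max(fy - crop_h // 2, 0), image_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)
```

As the quote notes, the failure modes of this approach are clear even in a sketch this small: a missed or spurious detection silently changes which region of the image survives the crop, and a face-free image always collapses to a center crop.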
The same issues appeared in tests using similarly well-known politicians.