Twitter is making changes to its artificial intelligence system as it investigates why it sometimes cut black people’s faces out of photos.
The social media giant said in a blog post this week that it was “committed to following the ‘what you see is what you get’ principles of design” after users discovered that the cropped photo previews shown as they scroll through their feeds often home in on white faces when white and black faces appear in the same image.
Programmer Tony Arcieri last week demonstrated the problem with an image that pictured both Senate Majority Leader Mitch McConnell and former President Barack Obama. The photo preview only showed McConnell’s face — even when Arcieri switched the position of their headshots and the color of their ties.
The same thing happened with another image featuring two cartoon characters from “The Simpsons”: Lenny, who is white, and Carl, who is black. Twitter cropped out Carl and only showed Lenny in the preview.
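The logic of these swap tests can be sketched in a few lines. The snippet below is a toy illustration, not Twitter's actual model (which is not public): `preview_face`, the scores, and the positional prior are all hypothetical stand-ins. The point it demonstrates is the one Arcieri's experiment relied on: if the crop choice is driven by the faces themselves rather than their positions, swapping the positions will not change which face is kept.

```python
def preview_face(layout, face_score, position_weight=0.0):
    """Pick the person whose face a saliency-style cropper would keep.
    Stand-in for the real model: score = hypothetical face saliency
    plus an optional positional prior favoring the top slot."""
    pos_prior = [0.1, 0.0]  # hypothetical slight preference for the top position
    scores = {person: face_score[person] + position_weight * pos_prior[i]
              for i, person in enumerate(layout)}
    return max(scores, key=scores.get)

def swap_test(face_score):
    """Arcieri-style check: crop both orderings of the two faces.
    Returns True if the same person is chosen regardless of position,
    i.e. the choice tracks the face, not the layout."""
    first = preview_face(["A", "B"], face_score)
    second = preview_face(["B", "A"], face_score)
    return first == second
```

With unequal face scores (say `{"A": 0.9, "B": 0.4}`), `swap_test` returns `True`: the higher-scored face wins both orderings, which is exactly the pattern users observed with the McConnell/Obama image.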
The new method will show the entirety of the photo that a user posts, rather than an algorithmically cropped preview.
“Bias in [machine learning] systems is an industry-wide issue, and one we’re committed to improving on Twitter,” CTO Parag Agrawal and chief design officer Dantley Davis wrote. “While no system can be completely free of bias, we’ll continue to minimize bias through deliberate and thorough analysis, and share updates as we progress in this space.”
The duo reiterated that Twitter had tested its neural network model for racial bias before rolling it out, but said they regretted not documenting their analysis in enough detail for it to be externally reproduced.
“There’s lots of work to do, but we’re grateful for everyone who spoke up and shared feedback on this,” the blog post said. “We’re eager to improve and will share additional updates as we have them.”
Twitter shares were down 1.6 percent at $45.96 Friday afternoon.