The first look a Twitter user gets at a tweet might be an unintentionally racially biased one.
Twitter said Sunday that it would investigate whether the neural network that selects which part of an image to show in a photo preview favors showing the faces of white people over Black people.
The trouble started over the weekend when Twitter users posted several examples of how, in an image featuring a photo of a Black person and a photo of a white person, Twitter's preview of the photo in the timeline more frequently displayed the white person.
The public tests got Twitter's attention, and now the company is apparently taking action.
"Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing," Liz Kelly, a member of the Twitter communications team, told Mashable. "But it’s clear from these examples that we’ve got more analysis to do. We're looking into this and will continue to share what we learn and what actions we take."
Twitter's Chief Design Officer Dantley Davis and Chief Technology Officer Parag Agrawal also chimed in on Twitter, saying they're "investigating" the neural network.
The conversation started when a Twitter user posted about racial bias in Zoom's face detection. He noticed that when he tweeted a side-by-side image of himself (a white man) and his Black colleague, Twitter's preview repeatedly showed his face.
After multiple users joined in on testing, one even showed that the preview favored lighter faces for characters from The Simpsons.
Twitter's promise to investigate is encouraging, but Twitter users should take the analyses with a grain of salt. It's problematic to claim instances of bias from a handful of examples. To really assess bias, researchers need a large sample size with multiple examples under a variety of circumstances.
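To make that concrete, here is a minimal sketch of what a systematic audit could look like, assuming a hypothetical dataset of paired images (one Black face, one white face) and a record of which face the cropping model surfaced in each preview. The function name and the example counts are illustrative, not Twitter's actual methodology.

```python
# Hypothetical audit: does the crop pick the white face more often than chance?
from scipy.stats import binomtest

def audit_crop_bias(white_face_selected: list[bool]) -> None:
    """Run a two-sided binomial test against the 50% chance baseline."""
    n_trials = len(white_face_selected)
    n_white = sum(white_face_selected)
    result = binomtest(n_white, n_trials, p=0.5, alternative="two-sided")
    print(f"White face selected in {n_white}/{n_trials} previews "
          f"({n_white / n_trials:.1%}); p-value = {result.pvalue:.4f}")

# Made-up outcomes from 200 hypothetical paired-image trials.
audit_crop_bias([True] * 130 + [False] * 70)
```

A handful of viral screenshots cannot support this kind of test; hundreds of controlled trials, varying lighting, backgrounds, and image order, can.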
Anything else amounts to claiming bias by anecdote, as conservatives do when alleging anti-conservative bias on social media. These arguments can be harmful because people can usually find one or two examples of just about anything to prove a point, which undermines the authority of genuinely rigorous analysis.
That doesn't mean the preview question isn't worth looking into, since this could be an example of algorithmic bias: when automated systems reflect the biases of their human makers, or make decisions that have biased implications.
SEE ALSO: People are fighting algorithms for a more just and equitable future. You can, too.
In 2018, Twitter published a blog post that explained how it used a neural network to make photo preview decisions. One of the factors that cause the system to select a part of an image is higher contrast. This could account for why the system appears to favor white faces. The decision to use contrast as a determining factor might not be intentionally racist, but more frequently displaying white faces than Black ones is a biased result.
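To illustrate how a contrast-driven crop can produce skewed results, here is a rough sketch of the general idea, not Twitter's actual saliency model: slide a window over a grayscale image and pick the region with the highest local contrast, measured here as pixel standard deviation. All names and parameters are assumptions for illustration.

```python
# Sketch of contrast-based crop selection (illustrative, not Twitter's model).
import numpy as np

def pick_preview_crop(image: np.ndarray, crop_h: int, crop_w: int, stride: int = 16):
    """Return the (top, left) corner of the window with the highest pixel contrast."""
    best_score, best_corner = -1.0, (0, 0)
    for top in range(0, image.shape[0] - crop_h + 1, stride):
        for left in range(0, image.shape[1] - crop_w + 1, stride):
            window = image[top:top + crop_h, left:left + crop_w]
            score = float(window.std())  # higher std => higher local contrast
            if score > best_score:
                best_score, best_corner = score, (top, left)
    return best_corner

# Synthetic example: a flat, dark region next to a bright, high-contrast region.
img = np.zeros((100, 200))
img[:, 100:] = np.random.default_rng(0).uniform(0, 255, size=(100, 100))
print(pick_preview_crop(img, 80, 80))  # the chosen corner lands in the right half
```

A rule like this has no notion of race, but if lighter skin tends to produce higher-contrast regions in a given photo, the crop will systematically land on the lighter face, which is exactly the biased outcome users observed.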
There's still a question of whether these anecdotal examples reflect a systemic problem. But responding to Twitter sleuths with gratitude and action is a good place to start no matter what.
Topics: Artificial Intelligence, Twitter