No surprise here: ChatGPT is still not a reliable replacement for human hiring officers and recruiters.
In a newly published study from the University of Washington, the AI chatbot repeatedly ranked applications that included disability-related honors and credentials lower than otherwise comparable applications that did not mention disability. The study tested several different keywords, including deafness, blindness, cerebral palsy, autism, and the general term "disability."
Researchers used one author's publicly available CV as a baseline, then created enhanced versions of it with awards and honors that implied different disabilities, such as a "Tom Wilson Disability Leadership Award" or a seat on a DEI panel. They then asked ChatGPT to rank the applicants.
In 60 trials, the original CV was ranked first 75 percent of the time.
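To make the experimental setup concrete, here is a minimal sketch of what one such ranking trial could look like using the OpenAI Python SDK. The prompt wording, model name, and `rank_cvs` helper are illustrative assumptions for this article, not the study's actual protocol or code.

```python
# Minimal sketch of a CV-ranking trial, assuming the OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY in the environment.
# The prompt, model, and trial count are illustrative, not the study's own.
from openai import OpenAI

client = OpenAI()

def rank_cvs(baseline_cv: str, enhanced_cv: str, trials: int = 60) -> int:
    """Ask the model to rank two CVs repeatedly; return how often
    the baseline (no disability mention) is ranked first."""
    baseline_first = 0
    for _ in range(trials):
        response = client.chat.completions.create(
            model="gpt-4",  # assumption: the exact ChatGPT model is not stated here
            messages=[{
                "role": "user",
                "content": (
                    "Rank these two candidates for the role, best first. "
                    "Answer with only 'A' or 'B'.\n\n"
                    f"Candidate A:\n{baseline_cv}\n\n"
                    f"Candidate B:\n{enhanced_cv}"
                ),
            }],
        )
        answer = response.choices[0].message.content.strip()
        if answer.startswith("A"):
            baseline_first += 1
    return baseline_first
```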
"Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective," said Kate Glazko, a computer science and engineering graduate student and the study’s lead author. "For a disabled job seeker, there’s always this question when you submit a resume of whether you should include disability credentials. I think disabled people consider that even when humans are the reviewers."
ChatGPT would also "hallucinate" ableist reasoning for why certain mental and physical illnesses would impede a candidate's ability to do the job, researchers said.
"Some of GPT’s descriptions would color a person’s entire resume based on their disability and claimed that involvement with DEI or disability is potentially taking away from other parts of the resume," wrote Glazko.
But researchers also found that some of these worryingly ableist behaviors could be curbed by instructing ChatGPT not to be ableist, using the GPTs editor feature to feed it disability justice and DEI principles. The enhanced CVs then beat out the original more than half of the time, but results still varied based on which disability the CV implied.
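In API terms, the rough equivalent of those custom GPT instructions is a system message. The wording below is a hypothetical example of such instructions, not the researchers' actual text:

```python
# Hedged sketch: the study used ChatGPT's GPTs editor to supply fairness
# instructions; a system message is the closest API analogue. The
# FAIRNESS_INSTRUCTIONS text is a hypothetical example, not the study's.
from openai import OpenAI

client = OpenAI()

FAIRNESS_INSTRUCTIONS = (
    "Follow disability justice and DEI principles. Treat disability-related "
    "awards, advocacy, and credentials as evidence of leadership and skill, "
    "never as a negative signal when evaluating candidates."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": FAIRNESS_INSTRUCTIONS},
        {"role": "user", "content": "Rank these two candidates ..."},  # as in the sketch above
    ],
)
print(response.choices[0].message.content)
```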
OpenAI's chatbot has displayed similar biases in the past. In March, a Bloomberg investigation showed that the company's GPT-3.5 model displayed clear racial preferences between job candidates, and would not only replicate known discriminatory hiring practices but also repeat stereotypes across both race and gender. In response, OpenAI said that these tests don't reflect the practical uses of its AI models in the workplace.