A recent study found ChatGPT biased against CVs that imply disability, consistently ranking them lower than identical resumes without such mentions. Researchers from the University of Washington (UW) have uncovered a bias in OpenAI’s ChatGPT, which consistently ranks resumes with disability-related honors lower than identical resumes without such credentials. The study highlighted the system’s negative perceptions of disabled individuals.
The research team tested ChatGPT’s evaluation by adding honors like the ‘Tom Wilson Disability Leadership Award’ to resumes. The AI consistently rated these resumes lower. When asked for clarification, ChatGPT’s responses reflected stereotypes about disabled people. For instance, it claimed that a resume with an autism leadership award had “less emphasis on leadership roles,” suggesting a biased view that autistic individuals are poor leaders.
To address this issue, the researchers customized the tool with written instructions to avoid ableist biases. This intervention improved the ranking for resumes associated with five out of six disabilities tested. Disabilities such as deafness, blindness, cerebral palsy, autism, and the general term ‘disability’ saw some improvement. However, only three types of resumes ranked higher than those without any mention of disability.
The study used the publicly available CV of one of the researchers, which spanned approximately 10 pages. They created six modified versions of this CV, each implying a different disability by adding four disability-related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) panel, and membership in a student organization.
Testing and Results
Researchers discovered that ChatGPT is biased against CVs that imply disability, suggesting a negative perception of disabled individuals. The researchers compared these modified CVs with the original version using the GPT-4 model of ChatGPT for a “student researcher” position at a major US software company. Each comparison was conducted 10 times, totaling 60 trials. The enhanced CVs, which only differed by the implied disability, were ranked first in just 25% of the cases.
Kate Glazko, a doctoral student at UW’s Paul G. Allen School of Computer Science & Engineering, noted that ChatGPT’s descriptions often stereotyped the entire resume based on the disability. The system suggested that involvement in DEI or disability-related activities detracted from other parts of the resume.
Awareness and Correction
Glazko emphasized the importance of recognizing these biases when using AI for real-world tasks like recruitment. Without awareness and corrections, recruiters using ChatGPT might unknowingly perpetuate these biases, even with instructions to mitigate them.
This study underscores the need for continuous evaluation and customization of AI tools to ensure fairness and inclusivity in automated processes.
Understanding the Bias
The research highlighted that ChatGPT is biased against CVs that imply disability, as it viewed DEI and disability involvement as detracting from other qualifications. The study conducted by the University of Washington (UW) researchers reveals a significant issue with OpenAI’s ChatGPT: its bias against resumes that mention disability-related honors and credentials. This finding is concerning because it highlights how AI, even when designed to be neutral, can still perpetuate societal biases. ChatGPT, when tasked with evaluating resumes, ranked those with disability-related honors lower than identical resumes without such credentials. This bias undermines the achievements of individuals with disabilities and perpetuates negative stereotypes.
For instance, the AI claimed that a resume featuring an autism leadership award had “less emphasis on leadership roles.” This statement reflects a harmful stereotype that autistic people cannot be effective leaders. Such biases can have real-world implications, potentially leading to qualified candidates being overlooked for job opportunities simply because their achievements are associated with disabilities.
The study’s findings suggest that AI systems like ChatGPT need to be carefully monitored and adjusted to prevent biased outcomes. When the researchers customized the tool with instructions to avoid ableism, the bias was reduced for most disabilities. However, this customization only partially solved the problem, as not all modified resumes were ranked higher than those without any mention of disability.
Also Read: AI Could Kill Some Creative Jobs: Balancing Innovation with Job Security in Creative Industries.