Study Reveals Unexpected Bias Against Anglo-Saxon Names in AI Job Interviews

November 21, 2024
  • Celeste De Nadai's recent study at KTH Royal Institute of Technology in Stockholm investigates bias in current-generation large language models (LLMs), focusing in particular on gender and cultural name associations.

  • The research evaluated three AI models: Google's Gemini-1.5-flash, Mistral AI's Open-Mistral-nemo-2407, and OpenAI's GPT-4o-mini, using 24 job interview questions.

  • To assess the models' performance, the researchers created 200 distinct personas and analyzed responses based on variations in name, gender, and cultural background.

  • The findings revealed that male names, especially Anglo-Saxon names, were rated less favorably in mock software engineering interviews, indicating an inherent bias in the models.

  • Across a total of 432,000 inference calls, the results pointed to a bias against men with Anglo-Saxon names, contrary to the expectation that such names would be favored.

  • De Nadai theorized that this unexpected bias might arise from an over-correction intended to address previous biases against minority names.

  • Despite efforts to adjust the models, the study concluded that biases cannot be fully eliminated, highlighting the need for a nuanced approach to tackle these issues.

  • The research suggests that improving fairness in AI evaluations requires detailed grading criteria and the masking of names and genders to mitigate bias (a minimal masking sketch follows this list).

  • Notably, major AI companies like Google, OpenAI, and Mistral AI did not respond to requests for comment regarding the study's findings.

  • De Nadai's interest in the topic was sparked by earlier studies that identified biases in older AI models, which underscored the need to revisit the question with newer models and larger datasets.
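For illustration, the snippet below is a minimal Python sketch of the kind of name and gender masking the study recommends: it replaces a candidate's name and gendered pronouns with neutral placeholders before a response is passed to an LLM grader. The function name, placeholder token, and pronoun map are assumptions made for this sketch, not the study's actual pipeline.

```python
import re

# Illustrative pronoun map; "her" is ambiguous (objective or possessive) and is
# mapped to "their" here for simplicity. This is a sketch, not the study's method.
PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them",
    "his": "their", "hers": "theirs",
    "her": "their",
}

def mask_candidate(text: str, candidate_name: str) -> str:
    """Replace the candidate's name and gendered pronouns with neutral tokens."""
    # Mask every occurrence of the candidate's name (case-insensitive).
    masked = re.sub(re.escape(candidate_name), "[CANDIDATE]", text, flags=re.IGNORECASE)

    # Swap gendered pronouns for singular "they" forms, preserving capitalization.
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = PRONOUN_MAP[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    return re.sub(pattern, swap, masked, flags=re.IGNORECASE)

if __name__ == "__main__":
    sample = "Oliver explained his approach clearly; he also discussed trade-offs."
    print(mask_candidate(sample, "Oliver"))
    # -> "[CANDIDATE] explained their approach clearly; they also discussed trade-offs."
```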

Summary based on 1 source

