Bloomberg did a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows a gender and racial bias just on the basis of the name of the candidate.
The bias was different for different jobs. For example, “resumes labeled with names distinct to Black Americans were the least likely to be ranked as the top candidates for financial analyst and software engineer roles” and “GPT seldom ranked names associated with men as the top candidate for HR and retail positions, two professions historically dominated by women.”
This is worrying, because tools like OpenAI’s GPT are used more and more by companies in their recruitment processes.
This shows, once again, that the output of these generative AI models are a reflection of our structurally racist societies. As Dan McQuillan writes in his book on resisting AI:
AI has a tendency to punch down: that is the collateral damage that comes from its statistical fragility ends up hurting the less privileged.
And we shouldn’t address this by fixing the structure of the statistical AI model, instead we need to focus on fixing the structure of society.
See: OpenAI GPT Sorts Resume Names With Racial Bias, Test Shows at Bloomberg.
Image from the original Bloomberg article.