Hiring is a consequential process for both employers and candidates. But despite decades of effort to improve it, everyone involved acknowledges that there is much room for improvement in the recruitment process, whether in how resumes are screened, how candidates are shortlisted, how interviews are conducted, or how hiring decisions are made. One of the main criticisms is that subjectivity and human biases creep into hiring decisions. Not only does this deprive deserving candidates of opportunities, it also leaves recruiting companies without the best-fit candidates for their open positions. Bias carries legal risk as well: organizations are subject to several employment laws that prohibit discrimination based on race, religion, gender, age, disability, and other factors.

THE PROS AND CONS OF AI
In this context, the prospect of using artificial intelligence (AI) in the recruitment cycle is appealing. AI tools promise to make the process more objective by grounding decisions in data, evaluating resumes more thoroughly (human recruiters spend only a few seconds reading a resume), and shortlisting candidates for interviews. Because the volume of applications can be quite high, automating many steps of the process makes it possible to conclude a search quickly. AI tools are also increasingly used not only for candidate screening but for video interviews and assessments as well. Job descriptions are created with AI tools, and algorithms decide whom online job ads are shown to.
These tools are usually built by HR technology vendors specializing in the domain, but with the rising popularity of generative AI, HR teams are also using commonly available tools like ChatGPT (or wrappers built on them).
However, when it comes to using AI for hiring, both traditional AI and generative AI tools are fraught with risks. Chief among them is that we may simply be replacing human biases with algorithmic biases, and be none the wiser for it. Consider the following examples of AI bias observed in the last few years:
- An AI-based interview tool was found to disadvantage candidates based on their race and gender.
- The U.S. Equal Employment Opportunity Commission found that the AI tools used by an organization providing tutoring services were rejecting older candidates.
- Male candidates were rated higher for leadership roles than other candidates with equal qualifications.
- Resumes with photos of lighter-skinned candidates were ranked higher during screening.
- Candidates with non-Western names were ranked lower during screening.
- Similarly, AI screening systems penalized candidates with cultural dress in their profile pictures.
- Video interview tools penalized candidates with non-native accents.
- Applications that mentioned disabilities were ranked lower by AI tools.
- Ads for higher-paying jobs were shown more frequently to men than to women.
- AI tools downranked candidates with shorter careers, impacting younger professionals.
- AI systems penalized applicants with non-standard career paths.
- Resumes with the words “maternity” or “caregiver” were penalized, impacting women.
- Career gaps, often due to caregiving responsibilities, led to lower AI ratings, disproportionately affecting mothers.
- Volunteer experiences common among women were not valued in screening.
These are only a few examples, and many such outcomes are actually prohibited by law. To be clear, it is not that AI vendors set out to create them: the data used to train the AI tools reflects a faulty baseline, which the tools then mimic or mirror. Even if factors such as age, race, and gender are excluded from the training data, they are often strongly correlated with other attributes (a graduation year, for instance, is a strong signal of age), so the algorithm picks up these signals anyway.
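To make the proxy problem concrete, here is a minimal, synthetic sketch in Python. The data, features, and model are illustrative, not any vendor's actual system, and it assumes numpy and scikit-learn are installed. A model trained on biased historical decisions, with the protected attribute removed, still reproduces the bias through a correlated feature:

```python
# Synthetic demo: a model that never sees a protected attribute can still
# learn bias through a correlated proxy feature (here, graduation year).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (older vs. younger cohort) -- NOT given to the model.
older = rng.integers(0, 2, n)

# Proxy feature: graduation year correlates strongly with the cohort.
grad_year = np.where(older == 1,
                     rng.normal(1995, 3, n),
                     rng.normal(2015, 3, n))

# True qualification is independent of the cohort.
skill = rng.normal(0, 1, n)

# Historical labels reflect a biased process: equally skilled older
# candidates were hired less often.
logit = 1.5 * skill - 1.2 * older
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on features that exclude the protected attribute.
X = np.column_stack([skill, grad_year])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The bias survives: predicted selection rates still diverge by cohort.
for g, name in [(0, "younger"), (1, "older")]:
    print(f"{name} cohort selection rate: {pred[older == g].mean():.2%}")
```

Dropping the protected column is not enough; the graduation year carries the same signal, which is why audits need to look at outcomes, not just inputs.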
WHAT AI CAN DO
So, what is the solution? My argument is not that AI has no role in hiring; it can improve the hiring process in many ways. For example, AI can be used to generate job descriptions that are more inclusive and more accurately reflect the job requirements. It can be used to provide disability accommodations. It can be used to generate assessments and screenings that better reflect the job duties. It can help standardize the interview process and give managers feedback on how they conduct interviews.
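As a hypothetical illustration of the first point, a general-purpose LLM can be asked to flag non-inclusive wording in a draft job description. The model name and prompt below are illustrative choices, not a recommendation; the sketch assumes the openai Python package and an OPENAI_API_KEY environment variable:

```python
# Illustrative sketch: ask a chat model to review a job description for
# non-inclusive wording. Prompt and model choice are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_job_description(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": ("You review job descriptions. Flag gendered, ageist, "
                         "or ableist wording and requirements not tied to the "
                         "job's actual duties, and suggest neutral rewrites.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(review_job_description(
    "Seeking a young, energetic rockstar ninja to join our fast-paced team."
))
```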
Screening candidates out or shortlisting them is a high-stakes decision, so the bar for AI tools used there should be higher. I have a few suggestions that can help:
- Train HR teams on the potential biases of AI tools.
- Understand how the HR-related AI vendors are training their AI models, the data used, and whether that is fit for purpose in your context.
- Consider picking HR-related AI vendors that incorporate responsible AI principles—fairness, accuracy, and transparency—in their solutions.
- Periodically audit the AI tools, checking for both disparate treatment and disparate impact (see the sketch after this list).
- Recognize that AI and generative AI are not silver bullets; understand their strengths and limitations, and use them prudently.
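On the auditing point, one common starting check is the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the tool deserves a closer look. The sketch below, with hypothetical group labels and counts, shows the calculation in Python:

```python
# Minimal disparate-impact audit using the four-fifths (80%) rule.
# Group labels and counts are illustrative placeholders, not real data.
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs from the AI tool."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Ratio of each group's selection rate to the most-selected group's rate.
    return {g: rate / best for g, rate in rates.items()}, rates

# Hypothetical screening outcomes logged from the tool.
log = [("group_a", True)] * 48 + [("group_a", False)] * 52 \
    + [("group_b", True)] * 30 + [("group_b", False)] * 70

ratios, rates = adverse_impact_ratios(log)
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rates[group]:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A ratio below 0.8 is a screening heuristic rather than a legal conclusion, but it is an easy, repeatable check to run on a tool's logged outcomes.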