Artificial intelligence (AI) is a powerful tool in many ways. In recruiting, it helps automate tasks that most recruiters don’t want to do, or don’t have time to do, such as sorting through large piles of job applicants to find relevant matches, or deciding where to post jobs to get the best candidate response.
AI frees up recruiting professionals to do more value-added work, like interacting with candidates and hiring managers and ensuring a seamless recruiting and interview process.
However, AI can be a double-edged sword. If not used properly, it can introduce bias into the recruiting process.
Here are some tips on how to prevent your AI from inadvertently biasing which candidates get recommended for opportunities and which jobs are recommended to them.
Exclude demographic data from your algorithms
Are you tracking the gender or ethnicity of candidates? This is great data to track for EEO purposes. But make sure your AI is not using these data fields in its matching. If it is, it will lead to bias.
For example, consider an app that recommends jobs to jobseekers and tracks what jobs they apply to. In this theoretical case, if women tend to apply less often to engineering roles, even if they are qualified, the AI may use this behavior to stop recommending engineering jobs to women, creating a gender bias.
This is a made-up case, but it illustrates the potential consequences of using tracking data in your AI algorithms. Track candidate demographics, but make sure you only use this data for your tracking and reporting purposes, and not in your matching.
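One way to enforce this separation is to split each candidate record into matching features and EEO tracking data before anything reaches the matcher. This is a minimal sketch; the field names (`gender`, `ethnicity`, and so on) are illustrative assumptions, not a standard schema.

```python
# Sketch: keep demographic fields out of the matching pipeline.
# Field names here are illustrative; adapt them to your own schema.

DEMOGRAPHIC_FIELDS = {"gender", "ethnicity", "age", "date_of_birth"}

def split_candidate_record(record: dict) -> tuple[dict, dict]:
    """Return (matching_features, eeo_tracking_data) so that
    demographic fields never reach the matching algorithm."""
    features = {k: v for k, v in record.items() if k not in DEMOGRAPHIC_FIELDS}
    tracking = {k: v for k, v in record.items() if k in DEMOGRAPHIC_FIELDS}
    return features, tracking

candidate = {
    "skills": ["python", "sql"],
    "years_experience": 5,
    "gender": "F",            # kept for EEO reporting only
    "ethnicity": "Hispanic",  # kept for EEO reporting only
}
features, tracking = split_candidate_record(candidate)
# features contains only skills and experience; tracking goes to reporting
```

The point of the split is architectural: if the matching code only ever receives the `features` dictionary, it cannot learn from demographic fields even by accident.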
Remove identifying information
Remove names, photos and anything identifiable from candidate profiles and resumes. This is an easy and obvious step to help prevent bias when looking at candidates for a job. By taking out any identifiable information, you are more likely to look at the merits of a candidate.
When AI parses resumes and profiles, it usually extracts keywords and phrases to identify skills and experience. In some cases, the AI may also inadvertently extract identifiable information about a person and start making matches based on this data.
While most systems are designed not to do this, it’s better to be careful and take out names and other identifiable information yourself.
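A simple pre-processing pass can strip the most common identifiers, such as email addresses and phone numbers, before resume text reaches a parser. This is a minimal sketch with illustrative regular expressions; real PII detection (especially for names) needs far more robust tooling.

```python
import re

# Sketch: redact obvious contact identifiers from resume text before
# it reaches a parsing/matching system. Patterns are illustrative and
# deliberately simple; they are not exhaustive PII detection.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def redact_contact_info(text: str) -> str:
    """Replace email addresses and US-style phone numbers with tags."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

resume_line = "jane.doe@example.com | (555) 123-4567 | Python, SQL"
print(redact_contact_info(resume_line))
# prints "[EMAIL] | [PHONE] | Python, SQL"
```

Note that names are much harder to redact reliably than emails or phone numbers; in practice this step is usually handled with a dedicated named-entity recognition tool rather than regular expressions.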
Do a reality check
When in doubt, you can usually run tests to ensure your AI is not generating biased results. For example, do you have AI that matches candidates in your ATS to jobs you have open? Try running the matching product against a list of female candidates and then against a list of similar male candidates. What types of jobs are recommended for female candidates? How about male candidates? Is there a discrepancy? And if so, why?
It could be that the candidate groups are different and therefore are matched to different types of jobs. Or there may be some bias in your system. Do some digging to find out whether there is possible bias in your AI applications.
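The reality check above can be approximated with a simple parity comparison: feed two comparable candidate groups through your matcher, then compare what fraction of recommendations fall into each job category. This is a minimal sketch; the recommendation lists are illustrative stand-ins for your matching product's actual output, and the 20% threshold is an arbitrary assumption for flagging.

```python
from collections import Counter

# Sketch: compare job-category recommendation rates between two
# comparable candidate test groups and flag large gaps for review.

def recommendation_rates(recommendations: list[str]) -> dict[str, float]:
    """Fraction of recommendations falling in each job category."""
    counts = Counter(recommendations)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# Illustrative output from running the matcher on each test group
female_group = ["engineering", "marketing", "engineering", "sales"]
male_group = ["engineering", "engineering", "engineering", "sales"]

f_rates = recommendation_rates(female_group)
m_rates = recommendation_rates(male_group)

for category in sorted(set(f_rates) | set(m_rates)):
    gap = abs(f_rates.get(category, 0.0) - m_rates.get(category, 0.0))
    if gap > 0.2:  # arbitrary threshold; flag for human review
        print(f"Check {category}: gap of {gap:.0%} between groups")
```

A flagged gap is not proof of bias by itself, since the groups may genuinely differ in skills or experience, but it tells you exactly where to start digging.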
AI encompasses a powerful set of tools that can help recruiters focus their time on the tasks that add more value, such as partnering with business leaders and engaging with candidates, while spending less time on busy work that can be automated.
But in order to bring all worthy candidates to the forefront, regardless of age, gender or ethnicity, it falls on us humans to properly train the AI. Doing so ensures that it does what we intend it to do and does not create unintended consequences.
This article was originally published on TAtech.org, the association for talent acquisition solutions.