Nearly every Fortune 500 company now uses some form of artificial intelligence to help it hire the best talent by screening resumes or analyzing test performance. But these AI hiring tools are probably spitting out worse candidates than hiring managers expect, according to new research.

The weakness comes down to a simple case of human-machine miscommunication: The AI thinks it is picking someone to hire, but the hiring manager just wants a list of promising candidates to interview. 

“When we ask these algorithms to select the 10 best resumes, we know we are not directly hiring these 10 people. We know there is a second stage of interviewing, but the AI doesn’t know that,” said Xu, a professor of management at the University of Florida and lead author of the new study.

“If you don’t specify that there are additional steps, the AI system might select 10 candidates that are good, but safe, choices. But if you tell the AI system there will be another round of screening by people, it might recommend different, and potentially stronger, candidates,” Xu added.

Xu and her co-author, UF management professor Zhang, developed a new algorithm that accounts for the realities of the hiring process. In a test using data on thousands of employees from a Fortune 500 company, the improved algorithm saved 11% on human interviewing costs while meeting goals for identifying high-quality applicants and avoiding bias.

The problem Xu and Zhang identified in current AI hiring tools is one of selection versus screening. The researchers focused on algorithms designed to reduce bias in hiring, known as fairness-aware algorithms. These AI resume screening programs are typically instructed to select the top candidates to hire.

But in the real world, hiring managers are simply looking for candidates to interview – nobody trusts AI hiring tools to hire directly.

And in most stacks of resumes, there are certain candidates who are likely to be either very strong or very weak. If forced to choose, the algorithm will avoid these risky candidates. But if instead asked to provide a pool of potential hires, the algorithm can suggest the high-risk, high-reward candidates and let its human partners suss out their true potential.
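
To make the selection-versus-screening gap concrete, here is a minimal Python sketch. It is not the researchers’ algorithm; the candidate scores, the uncertainty model and the confidence-bound scoring rules are all illustrative assumptions. The only point it shows is that a “hire” objective penalizes uncertain candidates, while a “screen” objective can reward them, because the interview round will resolve the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate pool: each candidate has an estimated quality
# score (mean) and the model's uncertainty about it (std). All numbers
# are illustrative, not from the study.
n_candidates, k = 100, 10
mean = rng.uniform(0.3, 0.8, n_candidates)   # predicted quality
std = rng.uniform(0.01, 0.25, n_candidates)  # model uncertainty

# "Select to hire": the algorithm bears all the risk, so a natural
# choice is a pessimistic score (a lower confidence bound). Uncertain
# candidates are penalized and dropped.
hire_score = mean - 1.0 * std
hire_picks = np.argsort(hire_score)[-k:]

# "Screen for interview": a human round follows and resolves the
# uncertainty, so an optimistic score (an upper confidence bound)
# keeps high-risk, high-reward candidates in the pool.
screen_score = mean + 1.0 * std
screen_picks = np.argsort(screen_score)[-k:]

print("Overlap between the two shortlists:",
      len(set(hire_picks) & set(screen_picks)), "of", k)
```

Run on random data like this, the two shortlists typically overlap only partially: the screening list trades some safe picks for high-variance candidates that the hiring objective would never surface.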

What’s more, an algorithm instructed to select hires instead of screening them can introduce entirely new biases into the hiring system, the researchers found.

So, Xu and Zhang modified an existing fairness-aware algorithm to clarify that it was screening candidates, not hiring them. In tests on data from almost 8,000 employees, the updated AI resume screening algorithm identified stronger candidates and reduced hiring costs.
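
The paper’s exact formulation isn’t reproduced in this article, but a rough sketch of the idea, under stated assumptions, looks like the Python below: an optimistic “screening” score stands in for the second-stage-aware objective, and a simple per-group quota stands in for the fairness-aware constraint. The function `screen` and its parameters (`min_share`, `beta`) are hypothetical, not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: quality estimates, uncertainty, and a binary group
# label used by the fairness constraint. All values are illustrative.
n, k = 200, 10
mean = rng.uniform(0.3, 0.8, n)
std = rng.uniform(0.01, 0.25, n)
group = rng.integers(0, 2, n)  # protected attribute (0 or 1)

def screen(mean, std, group, k, min_share=0.4, beta=1.0):
    """Pick k interview slots by optimistic score, guaranteeing each
    group at least min_share of the shortlist. Assumes both groups
    have enough candidates to fill their quotas."""
    score = mean + beta * std            # optimistic: interviews resolve risk
    order = np.argsort(score)[::-1]      # best candidates first
    quota = int(np.ceil(min_share * k))  # per-group floor
    picks, counts = [], {0: 0, 1: 0}
    for i in order:
        if len(picks) == k:
            break
        g = group[i]
        slots_left = k - len(picks) - 1
        # Only add this candidate if enough slots remain to still
        # satisfy the other group's quota afterwards.
        if slots_left >= max(0, quota - counts[1 - g]):
            picks.append(i)
            counts[g] += 1
    return np.array(picks)

shortlist = screen(mean, std, group, k)
print("shortlist:", shortlist)
print("group shares:", np.bincount(group[shortlist]) / k)
```

The quota here is the crudest possible fairness device; the point of the sketch is only where the screening objective enters – the optimistic score – not how a production system should enforce fairness.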

“We’re trying to send the message that we have to rework the language we use for these human-AI teams, otherwise the AI may optimize on poorly specified goals,” Xu said. “Our point is, sometimes we need to take responsibility, because we didn’t specify the task clearly.”

Xu and Zhang are publishing their findings in an upcoming issue of the journal Production and Operations Management. The research was funded in part by the National Science Foundation, Amazon Science and Meta.