But as worker shortages prompt widespread recruitment pushes, AI proponents say the technology, far from being risky, could help companies make hiring decisions fairer—not just faster.
The so-called Great Resignation, a mass restructuring of the workforce that coincided with the coronavirus pandemic, continues to loom large for companies, with surveys of corporate leadership showing staffing issues remain among the most pressing near-term risks. Many have turned to AI to bulk up their recruitment muscle, despite perennial warnings from regulators and experts of the potential for algorithms to effectively learn from and then magnify human biases.
Proponents, though, argue that removing the human element can actually help. Output from AI can be readily audited, and computers stripped of some of the hidden biases that can lurk in a person’s mind. A computer doesn’t have a hometown, didn’t go to college and doesn’t have hobbies, so won’t unconsciously warm to a friendly candidate the way a real recruiter might.
“One candidate might talk about their varsity lacrosse team, and when they were a captain of the team, and another candidate, ‘I watched the football game last night,’” said Kevin Parker, who is transitioning into an executive advisory role at hiring technology company HireVue Inc. after serving as chief executive. HireVue offers software that can automate interviews.
“When you can ask the candidates exactly the same question about the skills associated with their job, you get a much fairer outcome…and diversity improves as a result of that,” Mr. Parker said.
Many conversations about AI intervening in important decisions have focused on its potential to amplify biases, an outcome that can occur when the data set used to teach an AI system was itself the product of bias.
“There’s a growing realization that these tools can exacerbate bias,” said Matissa Hollister, a McGill University assistant professor of organizational behavior. “I don’t know how many times I’ve heard, ‘Keep the humans in human resources.’”
“Even tools that are not super sketchy can create significant backlash,” said Dr. Hollister, who recently collaborated with the World Economic Forum on a “tool kit” for AI in HR.
Some companies have had notable AI missteps. Amazon.com Inc., for example, reportedly scrapped an algorithm meant to aid the hiring of top talent when it learned that the tool would pan candidates whose résumés showed they had attended women’s colleges or participated in women’s clubs.
The tool, according to a Reuters report, learned what Amazon sought in a candidate from the résumés submitted to the company over the previous decade, a pool that skewed heavily male. Amazon told The Wall Street Journal that the project was explored only on a trial basis and scrapped because the algorithms were too primitive.
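That failure mode is easy to reproduce on made-up data. The sketch below, in Python, is a minimal illustration and not Amazon’s actual system: every feature name and number is invented. A simple model is trained on fabricated hiring records in which past reviewers penalized a job-irrelevant attribute, and the model dutifully learns that same penalty.

```python
# Hypothetical sketch of bias amplification: all data here is synthetic,
# and the feature names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)          # genuine job-relevant signal
proxy = rng.integers(0, 2, size=n)  # job-irrelevant attribute, e.g. a club membership

# Historical decisions were driven by skill, but past reviewers also
# penalized the proxy attribute. That penalty is the bias in the data.
hired = (skill - 1.2 * proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model recovers the historical penalty: the proxy coefficient is
# strongly negative even though the attribute says nothing about skill.
print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))
```

Nothing in the training step was told to discriminate; the pattern was simply present in the historical labels, and the model reproduced it.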
The potential for AI to cause harm has attracted regulatory attention. In October, the Equal Employment Opportunity Commission, a federal employment law enforcer, said it would study the use of AI in employment decisions.
AI must not become “a high-tech pathway to discrimination,” EEOC Chair Charlotte A. Burrows said when the agency’s move was announced. The EEOC did say, though, that it would also look at “promising practices” in AI and other emerging tools.
Frida Polli, CEO of Pymetrics Inc., said that though some systems can replicate human biases, others can help companies weed bias out. Pymetrics, which counts McDonald’s Corp. and Kraft Heinz Co. among its clients, uses games to evaluate candidates’ attributes such as their attention and risk tolerance, and to determine whether they would fit a particular job.
Dr. Polli, who worked as a neuroscientist at Harvard University and the Massachusetts Institute of Technology, said Pymetrics’s algorithms have been audited by experts from Northeastern University to ensure they don’t inadvertently discriminate.
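The article doesn’t specify the auditors’ methodology, but one standard check in employment-selection audits is the “four-fifths rule” from federal guidelines: if any group’s selection rate falls below 80% of the highest group’s rate, the tool is flagged for possible adverse impact. A minimal sketch, with invented pass rates:

```python
# Sketch of a four-fifths-rule check. The group names and counts below
# are invented; this is not Pymetrics's actual audit procedure.
def adverse_impact_ratios(selected_by_group: dict[str, tuple[int, int]]) -> dict[str, float]:
    """selected_by_group maps group -> (num_selected, num_applicants)."""
    rates = {g: sel / total for g, (sel, total) in selected_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = adverse_impact_ratios({"group_a": (48, 100), "group_b": (33, 100)})
for group, ratio in ratios.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b passes at 33% against group_a’s 48%, an impact ratio of about 0.69, below the 0.8 threshold, so the check would flag the tool for closer review.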
The tests don’t necessarily have right or wrong answers, but can help direct, for example, a methodical person to a job that suits such a personality, assisting companies in finding candidates who might otherwise be overlooked.
Some candidates—such as people who didn’t go to college, didn’t get good grades or don’t know someone already working at a company—might be suited for a job but not discovered in a labor-intensive process where recruiters are forced to make fast decisions and quickly rule out scores of candidates, Dr. Polli said.
“I love humans,” Dr. Polli said. “I don’t think we should be, you know, disintermediating humans anytime soon. [But] there’s no research that supports the idea that humans are unbiased.”
Mr. Parker of HireVue said the tools his company sells can also help broaden searches. HireVue’s automated interviewing software enables a company to interview hundreds or thousands of candidates who respond to prerecorded questions, and then parses the transcribed answers to gauge attributes such as how team-oriented a candidate might be.
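The article doesn’t describe HireVue’s models, but the general idea of turning a transcribed answer into a structured attribute score can be shown with a deliberately naive, hypothetical sketch; real systems are far more sophisticated than keyword counting.

```python
# Toy sketch of transcript scoring. This is NOT HireVue's method, which
# the article doesn't detail; the term list and scoring are invented.
TEAM_TERMS = {"we", "our", "team", "together", "collaborate", "collaborated"}

def team_orientation_score(transcript: str) -> float:
    """Fraction of words in the answer that signal team framing."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TEAM_TERMS for w in words) / len(words)

print(team_orientation_score("We built the launch plan together as a team."))
```

The appeal of this kind of scoring, whatever the underlying model, is that every candidate’s answer is measured against the same criteria rather than a recruiter’s impression.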
“We’re always looking for bias in the process,” Mr. Parker said. “We do a lot of rigorous testing to make sure that something that is undesirable doesn’t creep into the process.”
The software, created after HireVue’s founder noticed he was getting ignored by employers keen on graduates of Ivy League schools, interviews about a million people a month and has been used to conduct interviews in 40 languages. The company uses an AI ethics advisory board to navigate ethical issues, Mr. Parker said.
“It’s not meant and shouldn’t be in place of personal engagement,” Mr. Parker said of his product. “It’s just figuring out the most efficient way to figure out who you should be talking to.”
But an AI-powered platform has powerful advantages in giving a first look to candidates, Mr. Parker said. It can sit through a thousand interviews without getting bored or resorting to mental shortcuts that could unfairly see a candidate eliminated.
“A customer asked me one time, ‘Can the artificial intelligence tell me if the candidate’s wearing a tie?’” he said. “Like, no. Why do you want to know that? It shouldn’t matter what your background is. It shouldn’t matter what color shirt you’re wearing. None of that is relevant.”
McGill’s Dr. Hollister said that companies should be mindful of the high stakes, especially for applicants whose livelihoods are on the line, and consider being transparent with candidates about the technology’s use. Companies also must resist being sold on the supposed “mystery and power” of AI when assessing its role, she said.
The basic premise of AI is something “anyone can understand,” she said.