Wednesday, May 13, 2020

The Robots in HR: Using Artificial Intelligence to Evaluate Job Candidates

If you haven't heard, the robots are now making decisions about your career.  

That's a bit of hyperbole, at least for now, but human resources departments are increasingly using artificial intelligence tools to help evaluate job applicants at every stage of the hiring and promotion process. For example:
  • Even before a candidate applies, an AI candidate sourcing tool might be used to crawl LinkedIn and recommend qualified candidates to recruiting teams. 
  • Many people are aware that AI and other software tools screen resumes to help HR and hiring managers filter out clearly unqualified applicants. (ASIDE: I've even seen well-meaning people recommend that applicants copy and paste all of the keywords from the job posting into their resume in tiny, white font to ensure they are not screened out by software. I don't recommend this tactic; sophisticated tools are unlikely to be so easily fooled and may penalize the applicant. A toy sketch of such a screener follows this list.) 
  • When applicants are interviewed (for an initial hire or promotion), AI may be used to review video of the interview (in real time or after the fact) to evaluate a candidate's competence and fit.
  • Even after a candidate has been given an offer, AI may help perform background checks.
  • AI may be used to evaluate an employee's ongoing performance (using a myriad of metrics).
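
As promised in the resume-screening aside above, here is a toy sketch of how a naive keyword screener might work and why the white-font trick can backfire. Everything here (the keyword list, the detection regex, the function names) is invented for illustration; real vendor tools are far more sophisticated.

```python
import re

# Hypothetical screener logic -- not any vendor's actual product.
REQUIRED_KEYWORDS = ["python", "sql", "project management"]

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords that appear in the resume text."""
    text = resume_text.lower()
    return sum(kw in text for kw in REQUIRED_KEYWORDS) / len(REQUIRED_KEYWORDS)

def looks_stuffed(resume_html: str) -> bool:
    """Crude check for the white-font trick: inline white-on-white styling.
    A real tool would inspect computed styles, font sizes, and rendering."""
    return bool(re.search(r"color:\s*(#fff(?:fff)?|white)", resume_html, re.I))

resume = ("<p>Data analyst with Python and SQL experience.</p>"
          "<span style='color:#ffffff'>project management</span>")
print(keyword_score(resume))  # 1.0 -- the hidden keyword inflates the score
print(looks_stuffed(resume))  # True -- and the trick gets the applicant flagged
```
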
[Image: metallic humanoid robot posed like Rodin's The Thinker. © iStockphoto]
In this post, I want to focus specifically on the use of artificial intelligence to evaluate videos of job interviews, a practice fewer people seem to be aware of.  Popular media outlets generally began reporting on the emerging trend in late 2018.  Tools like Deep Sense, Stafory (known as "Vera" internationally and "Wendy" in the US), HiredScore, and HireVue use algorithms that capture candidates' tone of voice, word choice, facial expressions (including micro-expressions), and other data, and compare those signals against data from people previously identified as high performers on the job.
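
None of these vendors publish their algorithms, so the following is only a conceptual sketch of the "compare to high performers" approach: extract numeric features from an interview video, then score the candidate by similarity to an averaged high-performer profile. The feature set, the centroid approach, and cosine similarity are my simplifying assumptions, not any vendor's actual method.

```python
import numpy as np

# Hypothetical features a video tool might extract per interview, e.g.
# speech rate, pitch variance, smile frequency, positive-word ratio.
def fit_score(candidate: np.ndarray, high_performers: np.ndarray) -> float:
    """Cosine similarity between a candidate's feature vector and the
    centroid of employees previously labeled high performers (matrix rows)."""
    centroid = high_performers.mean(axis=0)
    return float(candidate @ centroid /
                 (np.linalg.norm(candidate) * np.linalg.norm(centroid)))

high_performers = np.array([[0.90, 0.40, 0.70, 0.80],
                            [0.80, 0.50, 0.60, 0.90],
                            [0.85, 0.45, 0.75, 0.85]])
applicant = np.array([0.60, 0.90, 0.20, 0.70])
print(f"fit score: {fit_score(applicant, high_performers):.2f}")
```

Notice how the bias risk discussed below falls directly out of this design: if the high-performer rows all share culturally specific expressions, a candidate who expresses differently scores poorly no matter how qualified they are.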

Big Upside Potential:

The benefit of tools like these is their potential to improve efficiency, lower costs, improve quality, and make fewer errors than an exclusively human, manual process.  (Did I mention they save companies time and money?)

Risks Associated With Video Interview AI:

There are, however, risks associated with AI, which have led Gartner to include the risk of AI/Machine Learning Bias in its Top 10 Emerging Risks of Q1 2020. (Subscription required.)

Specifically, the risks involve potential unfairness to individuals.  The tools may be ill-equipped to deal with unusual cases, such as an applicant who is otherwise qualified but has suffered a stroke or has facial scars/injuries that affect their facial expressions.  What are the implications under the Americans with Disabilities Act in that situation? 

The tools can be biased in other ways.  For example, if the applicant has vocal tonality, body language and facial expressions that reflect their national origin, but the pool of successful employees did not include people from a similar national origin, the AI could score the applicant poorly.  To make it even more concrete, imagine that the data pool is drawn from employees native to the U.S., and that data is used to evaluate an applicant from India.  The Indian applicant may have a tendency to shake their head from shoulder to shoulder as an affirmative indication, whereas in North America the same gesture tends to indicate uncertainty, irritation, or indifference.  (Indian-Canadian stand-up comic Russell Peters has a bit about this.) In this way, the AI can encode discrimination into the evaluation process in ways that are inaccurate (harming the potential employer) and unfair (harming the applicant). 
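
One concrete way to catch this kind of encoded discrimination is to audit outcomes against the EEOC's long-standing "four-fifths rule": if the selection rate for any group is less than 80% of the rate for the most-selected group, the process may have a disparate impact. Here is a minimal sketch (the group labels and numbers are invented):

```python
def four_fifths_check(outcomes: dict) -> dict:
    """outcomes maps group -> (number selected, number of applicants).
    Flags any group whose selection rate is below 80% of the best group's,
    the EEOC's traditional threshold for possible disparate impact."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

# Invented numbers: the AI advances 50% of Group A but only 30% of Group B.
print(four_fifths_check({"Group A": (50, 100), "Group B": (30, 100)}))
# {'Group A': False, 'Group B': True} -- Group B is flagged for review
```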

In response to these criticisms, AI firms and their advocates point out that humans are probably more biased (consciously or unconsciously) than the AI tools, and that AI can actually serve as a control against the bias of an individual interviewer by applying consistent scoring across a large number of interviews. Furthermore, advocates point out that the fix for the bias problem is simply to use the tools more: the more interviews the AI scores, the more representative its data pool becomes, and the easier its biases are to detect and correct.  (Opinion: Ultimately, I think this may indeed be a critical aspect of a long-term solution to implicit bias.)

Legislators in Illinois were not content to wait for AI tools to accumulate enough data to correct their own biases, however.  They enacted the Artificial Intelligence Video Interview Act to mitigate the risk of unfair treatment, making Illinois the first state to regulate the practice, effective January 1, 2020.  (It's not surprising that Illinois got there first; it was also the first state to adopt a specific biometric data law, the Biometric Information Privacy Act, in 2008.)

Five Simple Requirements:

The AIVIA is quite concise, with just five requirements.  An employer must:
  1. notify applicants for Illinois-based positions before their video interviews are analyzed by AI;
  2. explain to applicants how the AI works and what general characteristics will be used to evaluate them;
  3. obtain applicants’ consent to the use of the technology; 
  4. limit the sharing of videos to those whose expertise or technology is necessary for the stated purposes; and 
  5. destroy videos within 30 days of a destruction request from the applicant (including all backup copies and copies held by third parties).
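
To make the fifth requirement concrete, here is a minimal sketch of what a compliant destruction workflow might look like. The function names and data shapes are hypothetical, and a real implementation must also reach backup copies and third-party copies, which is usually the hard part.

```python
from datetime import datetime, timedelta, timezone

DEADLINE = timedelta(days=30)  # AIVIA: destroy within 30 days of a request

def destroy_everywhere(video_id: str) -> None:
    """Placeholder: a real version must delete the primary copy, all backup
    copies, and copies held by third parties, as the statute requires."""
    print(f"destroyed {video_id}")

def purge_and_audit(pending: dict) -> list:
    """pending maps video_id -> when the applicant requested destruction.
    Destroys everything immediately (the simplest way to stay inside the
    30-day window) and returns any IDs already past the deadline, which
    would indicate a compliance failure to investigate."""
    now = datetime.now(timezone.utc)
    overdue = [vid for vid, asked in pending.items() if now - asked > DEADLINE]
    for vid in list(pending):
        destroy_everywhere(vid)
        del pending[vid]
    return overdue

pending = {"video-123": datetime.now(timezone.utc) - timedelta(days=40)}
print(purge_and_audit(pending))  # ['video-123'] -- this one was already overdue
```
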
Criticisms of the AIVIA:

A downside to this brevity is its inherent ambiguity; a host of questions remain unanswered.  What must be included in the notice? How detailed must the description of the AI be? Must consent be given in writing (or can it be on video, or even implied)? Must an applicant request destruction in writing? Is there an exception to the destruction requirement for other legal retention obligations or litigation holds? All of these questions and more await answers.

In addition, the AIVIA has other flaws, in my opinion.  Briefly, here are the three major shortcomings I see in the legislation:

  1. The AIVIA should regulate the activity, not the input.  It should be about the practice of using AI, not about the data stored in video format.  The data should be regulated by a broader data protection law with notice, consent, sharing limitations, and retention/destruction rights that apply to all personal information (or sensitive personal information).  
  2. The AIVIA is too narrow.  Analysis of video is just one of several ways in which AI is used to make important decisions about the rights, freedoms and opportunities of individual humans.  For example, AI analysis of audio recordings of interviews would not technically be covered by the AIVIA, and certainly AI analysis of resumes and cover letters is not regulated.  A comprehensive AI law is needed to avoid singling out a narrow use of AI for regulation while leaving other, similarly risky, uses of AI ungoverned.
  3. The AIVIA lacks important substantive protections.  It should include a substantive right to request human intervention or challenge an AI decision, as GDPR Article 22 provides in cases of "automated decision-making," and an audit requirement of some kind.  AI creators need not disclose their proprietary algorithms and the like, but a knowledgeable regulator should have the ability to review the processes for shortcomings that are likely to cause significant unfairness to individuals.    

Other Legal Concerns:

  • The (U.S., federal) Equal Employment Opportunity Commission probably expects employers to establish controls to mitigate the risk of AI bias.  The EEOC held a meeting way back in 2016 to address this issue.   Bloomberg Law reported in November that the EEOC was investigating at least two cases in which AI bias was alleged.  
  • There is also a bill to regulate AI before the U.S. Congress. On April 10, 2019, a group of House Democrats introduced the Algorithmic Accountability Act of 2019, which would regulate any "automated decision system" that impacts consumers. It seems unlikely to pass at this point.
  • As mentioned above, the use of AI to evaluate job candidates is likely to be considered "automated decision-making" and/or "profiling" under Article 22 of the GDPR, meaning an organization must (i) give European applicants information about the processing; (ii) provide simple ways for EEA applicants to request human intervention or challenge a decision; and (iii) regularly audit or check to ensure automated systems are working as intended. (A record-keeping sketch covering these three obligations follows this list.)
  • Given the likelihood that other AI regulation is coming, organizations should think about adopting a common-sense, flexible, principles-based AI policy.  Doing so is probably the right move from a data ethics perspective, and may also help avoid future pain and effort as regulation in this area evolves.  
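
For organizations planning ahead, a simple first step is to keep an auditable record of every automated decision that tracks the three GDPR obligations above. The schema below is a hypothetical sketch, not a compliance checklist:

```python
import random
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    candidate_id: str
    score: float
    notice_given: bool = False            # (i) information about the processing
    human_review_requested: bool = False  # (ii) right to intervention/challenge
    reviewed_by_human: bool = False
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def audit_sample(decisions: list, rate: float = 0.05) -> list:
    """(iii) pull a random sample of decisions for periodic human review
    to check that the automated system is working as intended."""
    return [d for d in decisions if random.random() < rate]
```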


Sponsored Post: This blog post has been brought to you by the friendly automatons at Skynet.  
