Saturday, May 30, 2020

Are The California Consumer Privacy Act Regulations Delayed?

***Update: The California Attorney General filed the final regulations on July 2, after the deadline, and requested expedited review so that they could become enforceable within 30 business days (rather than on October 1, or even after the 90-day period provided for by COVID-19 changes to the rules).  After waiting almost two years to finalize rules, and still missing the deadline, the Attorney General now expects companies to react on short notice. ***

It appears the California Consumer Privacy Act regulations will not become effective on July 1st after all.

The latest round of revisions to the regulations, released by the California Department of Justice on March 11, triggered an additional 15-day public comment period, which ended in late March.  Following the comment period, the Department of Justice was required to consider any comments received and submit the final text to the California Secretary of State so that the California Office of Administrative Law could review them.  (For the handful of legal nerds interested in the minutiae, the entire administrative rulemaking process in California is described here.)

Under the ordinary rulemaking process in California, regulations become effective on one of four specific dates, based on when they are filed.  Rules filed between March 1 and May 31 are effective on July 1, but rules filed later (until August 31) are effective on October 1.
image of the California state flag (public domain)
It doesn't look like the California Department of Justice met the May 31 deadline: the CCPA regulations do not appear on the OAL's list of regulations under review as of today.  It therefore seems possible that the regulations will not become effective until October 1.  It is also possible that the regulations have been submitted and will still become effective on July 1; the OAL says there are exceptions to the normal process "if the agency demonstrates good cause for an earlier effective date."

The statute itself required the California DOJ to promulgate rules by July 1 ("On or before July 1, 2020, the Attorney General shall solicit broad public participation and adopt regulations to further the purposes of this title"), which is also the date on which the DOJ must begin enforcing the CCPA. It now appears that the DOJ will begin enforcing the statutory language on July 1 but will not be able to enforce the regulations until October.

[If you have any additional insights on this regulatory issue, please share them with me.]

Wednesday, May 27, 2020

Unlocking Your Next Career Move With Your Face: A New Facial Recognition Technology Law In Maryland For Employers to Consider


I wrote recently about the Artificial Intelligence Video Interview Act in Illinois, the first law of its kind regulating the use of A.I. in evaluating job candidates' interviews.  You can read all about it here.

image of dots and lines superimposed on a face
(c) Freepik
A similar law takes effect in Maryland on October 1, 2020.  There are important differences, however.  Unlike the Illinois AIVIA, the Maryland law is not targeted specifically at the use of A.I. to evaluate interviews.  Instead, it addresses the use of facial recognition during interviews (as a biometric identifier for recognition or tracking).

The new requirements:

The law requires job applicants to sign a waiver that includes (i) their name, (ii) the date, (iii) their consent to the use of "facial recognition during the interview," and (iv) a recital that the applicant has read the waiver.

You can read the statute here.

My opinion:

In my view, the policy decision here is correct, but the statute is under-inclusive. 
  • The statute is consistent with the general privacy principle of notice and consent at or before the time personal information is collected.  Beyond that, it is clearly the right thing for employers to notify applicants before using facial recognition, as a matter of personal data rights. 
  • While the brevity of the statute is appreciated, it leaves a number of questions unanswered, such as the effect of an applicant's refusal to consent.  Would a verbal, recorded waiver suffice?  Is the consent revocable?  The statute also fails to address the use of facial biometric data for purposes other than recognition and tracking (such as evaluating interviewees' performance).  One presumes these issues will be litigated in the years to come, unless the Maryland legislature enacts a more comprehensive privacy law or at least a biometric data privacy law. (The Maryland Personal Information Privacy Act covers biometric data, but only in the context of a data security breach.)
  • A more complete law would have a broader scope (e.g., all biometric data or all personal data) and would address other core rights such as use limitations, retention limitations, and the sale or sharing of a person's biometric data (compare to Texas' biometric data law).  

How to implement:

The waiver form prescribed by the new Maryland law should be very simple to draft (in a matter of minutes). 

Some organizations might also consider whether to create or amend a policy statement governing the use of facial recognition data and, if so, whether to incorporate it into an existing biometric data policy (perhaps one adopted pursuant to Illinois' BIPA) or an existing A.I. policy.




Wednesday, May 20, 2020

There's a New Cop on the Privacy Beat in Canada

Is there a new sheriff (or rather, Mountie) in town?

The Canadian Competition Bureau has just made waves by entering into a CA $9.5 million settlement with Facebook over privacy violations in connection with the Cambridge Analytica scandal.  The Bureau alleged that Facebook told users they could control access to their data, but the data remained accessible to third-party developers.

Canada has national and provincial data protection authorities (the national DPA is the Office of the Privacy Commissioner, and the provincial DPAs have similar names), but according to a law professor at the University of Ottawa, the OPC's enforcement powers under the PIPEDA are weak.  The Competition Bureau, by contrast, has much greater enforcement powers, at least by Canadian standards.  Even so, for a large, international organization like Facebook, a CA $9 million penalty (plus $500k in costs) is hardly a deterrent.  By comparison, the US Federal Trade Commission fined Facebook USD $5 billion.
 
What is to be done about the effete OPC?  The recent Digital Privacy Act, effective in 2018, clearly failed to strengthen the PIPEDA and the OPC sufficiently. (Here is my earlier post about that.) The Canadian government released a Digital Charter in May 2019 to outline proposed PIPEDA reforms.  The tenth and final priority of the Digital Charter is to strengthen enforcement and bring real accountability to bear.  Only time will tell whether that happens.

Here you can read the actual settlement terms: https://decisions.ct-tc.gc.ca/ct-tc/cdo/en/item/471812/index.do

And here's a final, amusing link: The Competition Bureau's own Facebook page: https://www.facebook.com/competitionbureaucanada

image of laptop with a red maple leaf sticker

Tuesday, May 19, 2020

Highlights of the 2020 Data Breach Investigations Report by Verizon

There are a number of surveys and studies published each year that provide empirical data about the cybersecurity landscape. One of them is the Verizon Data Breach Investigations Report, which combines publicly reported breaches with other sources (including intelligence gathered by the Verizon Threat Research Advisory Center). The 2020 DBIR has just been published. This year, Verizon amassed 157,525 incidents and 108,069 breaches.  Here are some interesting findings:
  • Trojans were once the primary means of hacking, comprising as much as 50% of all breaches in 2016. They now comprise only 6.5%.
  • Phishing and credential theft are on the rise.
  • External attackers continue to be much more common than internal threats.
  • Money continues to be the main motive, far ahead of espionage.
  • The most common hacker profile is the organized crime ring, followed by state-sponsored actors.
  • According to the FBI, hackers are more likely to be close by than around the globe: 85% of victims and perpetrators were in the same country, 56% in the same state, and 35% in the same city.
  • Errors are becoming a more common source of breach, while Misuse, Hacking, Malware and Social have all decreased as a percentage of the total; Physical penetrations remain uncommon.  The most common type of incident was a DDoS attack, while the most common type of breach was Social (phishing).
  • Errors (i.e., human errors such as misconfigurations and mistakenly sharing data) saw a significant increase.  Almost half of all Errors were discovered by security researchers (not by an audit or other internal source).
  • The large majority (80%) of hacks involved stealing or brute-forcing credentials.  Hackers relied less on finding vulnerabilities or backdoors.
  • Financially motivated social engineering keeps growing in popularity year over year.
  • Although we often think of on-prem solutions as more secure than the cloud, cloud-based assets were involved in just 24% of breaches, while on-prem assets were involved in 70%.
  • The most common attribute of affected data was that it was "personal" (PII/PI/PD), followed by credentials.
  • Among the breach "patterns" Verizon identifies, the most common involved web applications.
  • The Healthcare sector had the most breaches, followed by Finance, then a near-tie among Public, Education, Manufacturing, and Professional sectors.  
  • Retail saw only half as many breaches as the latter sectors.  Most Retail sector attacks involved e-commerce, a trend that grows year over year, while very few involved point of sale (POS) systems, continuing a multi-year decline.  In Retail, 99% of breaches were financially motivated.  Less than 20% of the data affected in Retail breaches was "internal" (trade secrets, business deals, etc.); the vast majority was personal data or payment information.
  • An organization's size has less bearing on its risk of breach this year than in recent years, probably due to the flight to the cloud, where large and small organizations are similarly vulnerable.
  • Within the data analyzed, there were more than four times as many breaches in North America as in APAC or EMEA.

You can read all the details for yourself in the full report, if you have time.

Wednesday, May 13, 2020

The Robots in HR: Using Artificial Intelligence to Evaluate Job Candidates

If you haven't heard, the robots are now making decisions about your career.  

That's a bit of hyperbole, at least for now, but artificial intelligence tools are being used increasingly by human resources departments to help evaluate job applicants at all stages of the hiring and promotion process, for example:
  • Even before a candidate applies, an AI candidate sourcing tool might be used to crawl LinkedIn and recommend qualified candidates to recruiting teams. 
  • Many people are aware that AI and other software tools screen resumes to help HR and hiring managers filter out clearly unqualified applicants. (ASIDE: I've even seen well-meaning people recommend that applicants copy and paste all of the keywords in the job posting into their resume in tiny, white font to ensure they are not screened out by software. I don't recommend this tactic; sophisticated tools are unlikely to be so easily tricked and may punish the applicant.) 
  • When applicants are interviewed (for an initial hire or promotion), AI may be used to review video of the interview (in real time or after the fact) to evaluate a candidate's competence and fit.
  • Even after a candidate has been given an offer, AI may help perform background checks.
  • AI may be used to evaluate an employee's ongoing performance (using a myriad of metrics).
image of metallic humanoid robot posing similar to Rodan's The Thinker sculpture
(C) istockphoto
In this post, I want to focus specifically on the use of artificial intelligence to evaluate videos of job interviews, a practice of which fewer people seem to be aware.  Popular media outlets generally began reporting on the emerging trend in late 2018.  Tools like Deep Sense, Stafory (known as "Vera" internationally and "Wendy" in the US), HiredScore and HireVue use algorithms that capture candidates' tone of voice, word choice, facial expressions (including micro-expressions), and other data, and compare them with data from people who have previously been identified as high performers on the job.
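
For readers curious about the mechanics, here is a minimal, purely illustrative sketch (in Python) of the kind of comparison such a tool might perform behind the scenes.  The feature names, the numbers, and the simple cosine-similarity scoring are my own assumptions for illustration; they are not the actual algorithm of any vendor named above.

  # Illustrative only: a toy "interview scoring" sketch.
  # The features and values below are hypothetical, not any vendor's real model.
  from math import sqrt

  def cosine_similarity(a, b):
      """Cosine similarity between two equal-length feature vectors."""
      dot = sum(x * y for x, y in zip(a, b))
      return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

  # Hypothetical features extracted from an interview video:
  # [speech rate, positive-word ratio, smile frequency, eye-contact ratio]
  high_performer_profile = [0.62, 0.48, 0.30, 0.75]  # average of past "high performers"
  candidate_features = [0.58, 0.52, 0.22, 0.70]      # the applicant being evaluated

  score = cosine_similarity(candidate_features, high_performer_profile)
  print(f"Similarity to high-performer profile: {score:.3f}")

A higher score suggests the candidate "looks like" past high performers, which is precisely why the composition of that historical pool matters so much.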

Big Upside Potential:

Tools like these have the potential to improve efficiency, lower costs, improve quality, and produce fewer errors than an exclusively human, manual process.  (Did I mention they save companies time and money?)

Risks Associated With Video Interview AI:

There are, however, risks associated with AI, and those risks led Gartner to include AI/Machine Learning Bias in its Top 10 Emerging Risks for Q1 2020. (Subscription required.)

Specifically, the risks involve potential unfairness to individuals.  The tools may be ill-equipped to deal with unusual cases, such as an applicant who is otherwise qualified but has suffered a stroke or has facial scars/injuries that affect their facial expressions.  What are the implications under the Americans with Disabilities Act in that situation? 

The tools can be biased in other ways.  For example, if the applicant has vocal tonality, body language and facial expressions that reflect their national origin, but the pool of successful employees did not include people from a similar national origin, the AI could score the applicant poorly.  To make it even more concrete, imagine that the data pool is drawn from employees native to the U.S., and that data is used to evaluate an applicant from India.  The Indian applicant may have a tendency to shake their head from shoulder to shoulder as an affirmative indication, whereas in North America the same gesture tends to indicate uncertainty, irritation, or indifference.  (Indian-Canadian stand-up comic Russell Peters has a bit about this.) In this way, the AI can encode discrimination into the evaluation process in ways that are inaccurate (harming the potential employer) and unfair (harming the applicant). 
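
To see how that can play out in numbers, here is a small, equally hypothetical extension of the sketch above: if the "high performer" profile is built only from U.S.-based employees, a feature that captures head movement can systematically depress the scores of applicants whose gestures carry a different meaning.  The features and values are again invented purely for illustration.

  # Illustrative only: how a training pool drawn from one population can
  # penalize candidates from another. Hypothetical features:
  # [positive-word ratio, smile frequency, "head held still" ratio]
  from math import sqrt

  def cosine_similarity(a, b):
      dot = sum(x * y for x, y in zip(a, b))
      return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

  # Profile built solely from U.S.-based employees, where frequent head
  # movement was (wrongly) treated as a signal of uncertainty:
  us_only_profile = [0.50, 0.35, 0.90]

  us_candidate = [0.50, 0.35, 0.88]      # gestures match the training pool
  indian_candidate = [0.50, 0.35, 0.40]  # head wobble used as an affirmative gesture

  print("US candidate score:    ", round(cosine_similarity(us_candidate, us_only_profile), 3))
  print("Indian candidate score:", round(cosine_similarity(indian_candidate, us_only_profile), 3))
  # Two substantively identical interviews receive different scores because the
  # model has encoded a cultural gesture difference as a performance difference.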



In response to these criticisms, AI firms and their advocates point out that humans are probably more biased (consciously or unconsciously) than the AI tools, and that AI can actually serve as a control against the bias of an individual interviewer by providing consistent scoring across a large number of interviews. Furthermore, advocates for AI point out that the solution to the bias problem is simply to use the tools more, so that they accumulate enough data to correct for skewed training pools.  (Opinion: Ultimately, I think this may indeed be a critical aspect of a long-term solution to implicit bias.)

Legislators in Illinois were not content to wait for AI tools to accumulate enough data to correct their own biases, however.  They enacted the Artificial Intelligence Video Interview Act to mitigate the risk of unfair treatment, making Illinois the first state to regulate the practice, effective January 1, 2020.  (It's not surprising that Illinois got there first; it was also the first state to adopt a specific biometric data law, the Biometric Information Privacy Act, in 2008.)

Five Simple Requirements:

The AIVIA is quite concise, with just five requirements.  An employer must:
  1. notify applicants for Illinois-based positions before their video interviews are analyzed by AI;
  2. explain to applicants how the AI works and what general characteristics will be used to evaluate them;
  3. obtain applicants’ consent to the use of the technology; 
  4. limit the sharing of videos to those whose expertise or technology is necessary for the stated purposes; and 
  5. destroy videos within 30 days of a destruction request from the applicant (including all backup copies and copies held by third parties).
Criticisms of the AIVIA:

A downside to this brevity is the inherent ambiguity. There are a host of unanswered questions.  What must be included in the notice? How detailed must the description of AI be? Must consent be given in writing (or can it be on video, or even implied)? Must an applicant request destruction in writing? Is there an exception to the destruction requirement for other legal retention requirements or litigation holds? All of these questions and more await answers.

In addition, the AIVIA has other flaws, in my opinion.  Briefly, here are the three major shortcomings I see in the legislation:

  1. The AIVIA should regulate activity, not the input.  The law should be about the practice of using AI, not about data stored in video format.  The data itself should be governed by a broader data protection law, with notice, consent, sharing limitations, and retention/destruction rights that apply to all personal information (or at least sensitive personal information).  
  2. The AIVIA is too narrow.  Analysis of video is just one of several ways in which AI is used to make important decisions about the rights, freedoms and opportunities of individual humans.  For example, AI analysis of audio recordings of interviews would not technically be covered by the AIVIA, and certainly AI analysis of resumes and cover letters is not regulated.  A comprehensive AI law is needed to avoid singling out a narrow use of AI for regulation while leaving other, similarly risky, uses of AI ungoverned.
  3. The AIVIA lacks important substantive protections.  It should include a substantive right to request human intervention or challenge an AI decision, as GDPR Article 22 provides in cases of "automated decision-making," and an audit requirement of some kind.  AI creators need not disclose their proprietary algorithms and the like, but a knowledgeable regulator should have the ability to review the processes for shortcomings that are likely to cause significant unfairness to individuals.    

Other Legal Concerns:


  • The (U.S., federal) Equal Employment Opportunity Commission probably expects employers to establish controls to mitigate the risk of AI bias.  The EEOC held a meeting way back in 2016 to address this issue.   Bloomberg Law reported in November that the EEOC was investigating at least two cases in which AI bias was alleged.  
  • There is also a bill to regulate AI before the U.S. Congress.  On April 10, 2019, a group of House Democrats introduced the Algorithmic Accountability Act of 2019, which would regulate any "automated decision system" that impacts consumers.  It seems unlikely to pass at this point.
  • As mentioned above, the use of AI to evaluate job candidates is likely to be considered "automated decision-making" and/or "profiling" under Article 22 of the GDPR, meaning an organization must (i) give European applicants information about the processing; (ii) provide simple ways for EEA applicants to request human intervention or challenge a decision; and (iii) regularly audit or check to ensure automated systems are working as intended.
  • Given the likelihood that other AI regulation is coming, organizations should think about adopting a common-sense, flexible, principles-based AI policy.  Doing so is probably the right move from a data ethics perspective, and may also help avoid future pain and effort as regulation in this area evolves.  


Sponsored Post: This blog post has been brought to you by the friendly automatons at Skynet.