Monday, December 14, 2020

Time Is Almost Up To Make Your Canadian Website Accessible

 

If your organization has a Canadian website (.ca), you should know that Ontario has a web accessibility law that may soon require remediation of your website. 

Under the Accessibility for Ontarians with Disabilities Act (AODA), all private organizations with more than 50 employees (for profit and non-profit) and all public-sector organizations are required to make their websites accessible.

The AODA has been around for a while, and it required all new websites built after 2014 to be WCAG 2.0 Level A compliant.  The law did not immediately require remediation of old content, but by January 1, 2021, all web content posted after January 1, 2012 must meet WCAG 2.0 Level AA compliance standards.  (Content created before 2012 does not have to be remediated.)

There is no right to sue under the AODA, so don't expect a flood of lawsuits like we've seen in the U.S.   There are administrative fines that are small for first offenders but hefty for repeat offenders (up to CA$15,000 per day for repeated, serious noncompliance), so ignore the requirement at your peril.

 

Sunday, December 6, 2020

Beware of Dark Patterns in Online UI/UX


Have you ever found yourself spending countless hours unintentionally falling down an internet rabbit hole? Have you noticed it is often remarkably easy to sign up for a free trial, but found the experience of unsubscribing Kafkaesque?  The manipulation tactics behind these experiences are known as "dark patterns," and they are emerging as a new area of legal regulation.

Relativity, by M.C. Escher, 1953

Technology companies now employ specialized professionals to manage the user experience (UI/UX), and in many ways the tactics they employ can be quite helpful. When UI/UX engineering goes awry, as millions have recently learned thanks to the popular documentary film The Social Dilemma, users can suffer. 

The Electronic Frontier Foundation describes  dark patterns as the opposite of Privacy by Design, and worries the patterns will have dire consequences for personal privacy. When technology is designed to manipulate, obfuscate, or misdirect, one of the first harms the user is likely to experience is a loss of control over their personal information.  Financial and other harms are also likely.

Though common on social platforms, dark patterns are also found outside of social media. I am particularly interested in the retail sector these days, and a recent Princeton University study examined more than 10,000 retail websites and found that more than 1,200 (12%) used dark patterns!

The US Federal Trade Commission recently defined dark patterns as “design features used to deceive, steer, or manipulate users into behavior that is profitable for an online service, but often harmful to users or contrary to their intent.” In a recent enforcement action against Age of Learning for its use of dark patterns in connection with an online learning tool for kids, the FTC claimed Age of Learning made it unduly difficult for users to cancel recurring subscription fees after initially promising “Easy Cancellation.” 

Similar concerns exist in Europe.  France's data protection authority, CNIL, has also published a report on dark patterns.

Although Section 5 of the FTC Act likely gives the FTC the power to address egregious dark patterns that are fundamentally unfair or deceptive, a legislative response may be coming.  In 2019, a bill was introduced in the US Senate to address these practices, the "Deceptive Experiences To Online Users Reduction (DETOUR) Act". As of today, it's still in committee.

Technology professionals should be aware of dark patterns and take steps to avoid them when designing consumer-facing technologies.  (You don't want to end up in a regulator's crosshairs, nor on Harry Brignull's Dark Patterns Hall of Shame!)



Thursday, October 8, 2020

A New Technology for Web Browser Opt-Outs Could Trigger New Legal Requirements

If your organization has a website directed at California consumers (or US consumers generally), you should start thinking about this soon: A group has developed the technology to honor web browsers’ privacy signals, which could have implications under the California Consumer Privacy Act.

The California Online Privacy Protection Act of 2003 (CalOPPA) was the first broadly-applicable consumer privacy statute in the US. It merely required companies to have website privacy policy statements and to state clearly whether or not they would honor “Do Not Track” signals from a user’s web browser; it did not require companies to honor those signals.

The lawmakers assumed that technology would be created to honor those opt-out signals, and that companies would be pressured by market forces to honor them, but that never really happened.  It has been almost impossible to honor those signals until now, and as a result, almost all US-facing websites have a privacy policy statement that says “we do not recognize browsers’ Do Not Track signals.”

California's Attorney General, who enforces CalOPPA, was not satisfied with industry's failure to develop the technology and honor Do Not Track signals.  When the AG released regulations under the California Consumer Privacy Act, which became enforceable in mid-August, he included a provision that says that companies must honor browsers’ privacy signals as a valid Do Not Sell instruction…even though the technology doesn’t yet exist.

The AG explained that the requirement "is forward-looking and intended to encourage innovation and the development of technological solutions to facilitate and govern the submission of requests to opt-out." Section 999.315 of the regulations says "[i]f a business collects personal information from consumers online, the business shall treat user-enabled global privacy controls, such as a browser plug-in or privacy setting, device setting, or other mechanism, that communicate or signal the consumer’s choice to opt-out of the sale of their personal information as a valid request submitted pursuant to Civil Code section 1798.120 for that browser or device, or, if known, for the consumer." [emphasis added]

Now, it appears a group of companies and nonprofits, including the Electronic Frontier Foundation and DuckDuckGo, have developed the technology, calling it the Global Privacy Control framework.  The express intent, according to the creators, is "to communicate a Do Not Sell request from a global privacy control, as per CCPA-REGULATIONS §999.315." It is already available in beta in certain browser updates or as add-on browser scripts, and consumers will begin sending those signals all over the Internet.  Companies will be under tremendous pressure to adopt the technology framework and begin honoring the signals quickly.

 It is not yet perfectly clear if and when the GPC would be treated as a legally binding Do-Not-Sell instruction.  Here's why:

  • It is not clear whether the AG had the authority to include this requirement in section 999.315 of the regulations.  The global privacy control concept is not expressly stated in the CCPA, although the DOJ and Office of Administrative Law obviously felt the authority was there.  The delegation of authority to the AG in Section 1798.185(a)(7) is broad.
  • Competing frameworks could develop. It is not clear who will decide whether a framework is "official" or "enforceable."  Perhaps a formal endorsement of the California Attorney General is required.  California's Attorney General has informally endorsed the GPC framework via Twitter.
  • Right now the GPC framework is not a finalized standard, according to the website.  It's still being tested.  It is not certain when it would be finalized.

Because the CCPA's definition of "sale" is so broad, and could be interpreted to cover technologies that are ubiquitous across the web (such as third-party advertising cookies), the GPC could affect large numbers of website operators.

Key Point: Companies covered by CCPA should begin thinking now about whether and how to implement this new technical framework. 
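For the technically inclined, here is a rough idea of how the signal is supposed to work. According to the draft GPC specification, a supporting browser or extension announces the user's choice with a simple "Sec-GPC: 1" request header (and a corresponding JavaScript property, navigator.globalPrivacyControl). The sketch below is a minimal illustration, not a compliance recipe: it assumes a Flask web application, and the opt_out_of_sale() helper is hypothetical, standing in for whatever a business actually does to suppress "sales" of a user's data.

# Minimal sketch, assuming the draft GPC spec's "Sec-GPC: 1" request header.
# The opt_out_of_sale() helper is hypothetical.
from flask import Flask, request

app = Flask(__name__)


def opt_out_of_sale(identifier: str) -> None:
    # Hypothetical hook: record a Do-Not-Sell opt-out for this device/session.
    print(f"Recording Do-Not-Sell opt-out for {identifier}")


@app.before_request
def honor_gpc_signal():
    # Browsers and extensions that support the proposed control are expected
    # to send the header with every request; treat it as an opt-out for that
    # browser or device (compare section 999.315 of the CCPA regulations).
    if request.headers.get("Sec-GPC") == "1":
        identifier = request.cookies.get("session_id", "anonymous-device")
        opt_out_of_sale(identifier)


@app.route("/")
def index():
    return "Hello, world"

Of course, detecting the signal is the easy part; the harder work is plumbing that opt-out through advertising tags, analytics, and other downstream data flows.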

Tuesday, July 14, 2020

The Employee Data Dilemma: Should Companies Establish An Employee Privacy Program Now, or Later?

At a time when a global pandemic and economic recession have left many employers in the US cash-strapped, most would probably prefer to defer any investment of time and money in an employee data privacy compliance program.  

Employers with a substantial number of California employees (or contractors) are currently faced with a conundrum: whether to establish an employee data privacy program now, or later.

The California Consumer Privacy Act, as originally written, applied to  personal information about "consumers," but the term "consumer" was so broadly defined that some people speculated that it covered employees and contractors as well as individual customers.  Later in 2019, AB 25 was proposed by Assemblymember Chau to clarify that the intent was not to cover employees, but after objections were raised (and backed by powerful labor unions, I'm told), AB 25 was amended to create a temporary,  partial exclusion from the CCPA until January 1, 2021.  That is the version of AB 25 that passed.  The idea, as I understand it, was that the California legislature would come up with some other way of addressing employee privacy before the end of 2020.  (I wrote about that briefly here.)

We are now halfway through 2020, and the legislature has not yet delivered a solution.  Companies are starting to grow concerned.  Under current law, companies have six months to create an employee data privacy program.  

The California Privacy Rights Act, better known as "CCPA 2.0," is a ballot initiative promoted by the same people behind the CCPA, and it has officially qualified to be on the November ballot in California.  Polling suggests it is highly likely to be approved by voters.  There is one aspect of CPRA that would help companies: It would extend the partial exemption of employee and contractor data for two additional years.

The problem for companies is that we will not know if CPRA has passed until November 3. If it does not pass, it will be too late to do the work required by January 1 (less than two months later). 

Companies must decide whether (a) to take the gamble that CPRA will pass, and defer the work, or (b) to do the work now, even though it likely will not be necessary to comply until January 1, 2023.

(There is a third possibility: Assemblymember Chau has introduced AB 1281, which would push the deadline out by one year to January 1, 2022.  Unfortunately, that bill has not made meaningful progress in the legislature, and currently lingers in committee.  Perhaps, if the CPRA somehow fails, AB 1281 could be enacted rapidly during November or December.)

Based on my informal survey of privacy professionals, it seems many companies are not preparing employee privacy programs, and are simply assuming that CPRA will pass.  (I have not yet seen any actual polling of privacy pros on this question.)  There is certainly a degree of risk in this approach.  Companies with the resources would be best served by preparing now, rather than later.  Companies struggling to survive, however, have a difficult decision to make. 


Saturday, June 20, 2020

What is a "Fellow of Information Privacy"?


The Fellow of Information Privacy (FIP) is a designation awarded by the International Association of Privacy Professionals (IAPP). Inclusion among the Fellows means that a privacy professional has "demonstrated...comprehensive knowledge of privacy laws, privacy program management and essential data protection practices through... successful completion of credentials... considerable on-the-job experience," and continuing education.  The criteria for consideration are:
  • hold a CIPP designation (US, Canada, Europe or Asia); and
  • hold either a Certified Information Privacy Manager (CIPM) or Certified Information Privacy Technologist (CIPT) certification; and
  • demonstrate three years of work experience focused on data privacy (ISC2, ISACA, IEEE certifications can satisfy one year); and
  • provide peer reviews; and
  • submit a written statement describing one's experience and expertise.
If you're an experienced, credentialed privacy professional, you should consider applying.  (There is no cost.)


Sunday, June 7, 2020

What Is A "Sale" of Personal Information Under the CCPA? No One Knows (Although the California Department of Justice Says It Is Obvious)



In my opinion, the most vexing unresolved question associated with the California Consumer Privacy Act is what the term "sale" means.  Specifically, what "valuable consideration" renders the sharing of personal information a "sale"?  The consequences for businesses are substantial. 

One comment submitted during the rulemaking process, Comment W280-5, dealt with this specifically and politely: "...it would be useful to have the definitions of ‘sale’ and ‘valuable consideration’ clarified."

The Attorney General's regulations and guidance have repeatedly failed to address or explain this, despite pleading from industry. Instead, the California Department of Justice glibly declined on the basis that the meaning is already clear: "There is no need to clarify the definition of 'sale'....   [T]he terms 'valuable' and 'consideration' are reasonably clear and should be understood by the plain meaning of the words."

It is anything but clear.

Even the CCPA's proponents, Californians for Consumer Privacy, know it is unclear.  They have attempted to add clarity in the CPRA proposal, although even that attempt falls far short. 

As a result, we will likely spend enormous time, energy, and money (including taxpayer dollars) in the coming months and years trying to bring clarity to the question. In the meantime, there will be massive uncertainty in several of the industries that Californians and Americans desperately need to help revive the economy as the COVID-19 pandemic subsides. 
Sale hang tag, red, with word "sale" printed in white
(R) FreeVectors.net

Saturday, May 30, 2020

Are The California Consumer Privacy Act Regulations Delayed?

***Update: The California Attorney General filed the final regulations on July 2nd, late, and requested expedited review so that they could be enforceable within 30 business days (as opposed to October 1 or even the 90 day period provided for by COVID-19 changes to the rules).  After having waited almost two years to finalize rules, and still missing the deadline, the Attorney General now expects companies to be able to react on short notice. ***

It appears the California Consumer Privacy Act regulations will not become effective on July 1st after all.

The latest round of revisions to the regulations, released by the California Department of Justice on March 11, triggered an additional 15 day public comment period, which ended in late March.  Following the comment period, the Department of Justice was required to consider any comments received, and submit the final text to the California Secretary of State so that the California Office of Administrative Law could review them.  (For the handful of legal nerds interested in the minutiae, the entire administrative rulemaking process in California is described here.)

Under the ordinary rulemaking process in California, regulations become effective on one of four specific dates, based on when they are filed.  Rules filed between March 1 and May 31 are effective on July 1, but rules filed later (until August 31) are effective on October 1.
image of the California state flag (public domain)
It doesn't look like the California Department of Justice met the May 31 deadline, because the CCPA regulations are not showing up on the OAL's list of regulations under review as of today.  It seems possible that the regulations might not be effective until October 1.  It's also possible that the regulations could have been submitted and will still become effective on July 1. The OAL says that there are exceptions to the normal process "if the agency demonstrates good cause for an earlier effective date."

The statute itself required the California DOJ to promulgate rules by July 1 ("On or before July 1, 2020, the Attorney General shall solicit broad public participation and adopt regulations to further the purposes of this title"), which is also the date on which the DOJ must begin enforcing the CCPA. It now appears that the DOJ will begin enforcing the statutory language on July 1 but will not be able to enforce regulations until October.

[If you have any additional insights on this regulatory issue, please share them with me.]

Wednesday, May 27, 2020

Unlocking Your Next Career Move With Your Face: A New Facial Recognition Technology Law In Maryland For Employers to Consider


I wrote recently about the Artificial Intelligence Video Interview Act in Illinois, the first law of its kind regulating the use of A.I. in evaluating job candidates' interviews.  You can read all about it here.

image of dots and lines superimposed on a face
(c) Freepik
Maryland will have a similar law on October 1, 2020.  There are important differences, however.  Unlike the Illinois AIVIA, the Maryland law is not targeted precisely at the use of A.I. to evaluate interviews.  Instead, it addresses the use of facial recognition (as a biometric identifier for recognition or tracking). 

The new requirements:

The law states that job applicants must sign a simple waiver that includes (i) their name, (ii) the date, (iii) their consent to the use of "facial recognition during the interview," and (iv) a recital that the applicant has read the waiver.

You can read the statute here.

My opinion:

In my view, the policy decision here is correct, but the statute is under-inclusive. 
  • The statute is consistent with the general privacy principle of notice and consent at or before the time personal information is collected.  Beyond that, it is clearly the right thing for employers to notify applicants before using facial recognition, as a matter of personal data rights. 
  • While the brevity of the statute is appreciated, it leaves a number of questions unanswered, such as the effect of an applicant's refusal to consent.  It is unclear whether a verbal, recorded waiver would suffice.  Is the consent revocable?  The statute also fails to address the use of facial biometric data for purposes other than recognition and tracking (such as to evaluate interviewees' performance).  One presumes these issues will be litigated in the years to come, unless the Maryland legislature enacts a more comprehensive privacy law or at least a biometric data privacy law. (The Maryland Personal Information Privacy Act covers biometric data but only in the context of a data security breach.)
  • A more complete law would have a broader scope (e.g., all biometric data or all personal data) and would address other core rights such as use limitations, retention limitations, and the sale or sharing of a person's biometric data (compare to Texas' biometric data law).  

How to implement:

The waiver form prescribed by the new Maryland law should be very simple to draft (in a matter of minutes). 

Some organizations might consider whether to create or amend a policy statement governing the use of facial recognition data, and if so, whether to incorporate into an existing biometric data policy (perhaps adopted pursuant to Illinois' BIPA) or an existing A.I. policy.




Wednesday, May 20, 2020

There's a New Cop on the Privacy Beat in Canada

Is there a new sheriff (make that Mountie) in town?

The Canadian Competition Bureau has just made waves by entering into a CA $9.5 million settlement with Facebook over privacy violations in connection with the Cambridge Analytica scandal.  The CB alleged that Facebook told users they could control access to their data, but the data was still accessible by third-party developers.

Canada has national and provincial data protection authorities (the national DPA is the Office of the Privacy Commissioner and provincial DPAs have similar names), but according to a law professor at the University of Ottawa, the OPC's enforcement authorities under the PIPEDA are weak.  The Competition Bureau, by contrast, has much greater enforcement powers, at least by Canadian standards. For large, international organizations like Facebook, a CA $9 million penalty (and $500k in costs) is hardly a deterrent.  By comparison, the US Federal Trade Commission fined Facebook USD $5 billion.  
 
What is to be done about the effete OPC?  The recent Digital Privacy Act, effective in 2018, clearly failed to strengthen the PIPEDA and the OPC sufficiently. (Here is my earlier post about that.) The Canadian government released a Digital Charter in May 2019 to outline proposed PIPEDA reforms.  The tenth and final priority of the Digital Charter is to strengthen enforcement and bring real accountability to bear.  Only time will tell if that happens.

Here you can read the actual settlement terms: https://decisions.ct-tc.gc.ca/ct-tc/cdo/en/item/471812/index.do

And here's a final, amusing link: The Competition Bureau's own Facebook page: https://www.facebook.com/competitionbureaucanada

image of laptop with a red maple leaf sticker

Tuesday, May 19, 2020

Highlights of the 2020 Data Breach Investigations Report by Verizon

There are a number of surveys and studies published each year that provide empirical data about the cybersecurity landscape. One of them is the Verizon Data Breach Investigations Report, which compiles publicly-reported breaches with other sources (including intelligence gathered by the Verizon Threat Research Advisory Center). The 2020 DBIR has just been published. This year, Verizon amassed 157,525 incidents and 108,069 breaches.  Here are some interesting findings:
  • Trojans were once the primary means of hacking, comprising as much as 50% of all breaches in 2016. They now comprise only 6.5%.
  • Phishing and credential theft are on the rise.
  • External attackers continue to be much more common than internal threats.
  • Money continues to be the main motive, far ahead of espionage.
  • The most common hacker profile is the organized crime ring, followed by state-sponsored actors.
  • According to the FBI, hackers are more likely to be close by than around the globe: 85% of victims and perpetrators were in the same country, 56%  in the same state and 35% in the same city.
  • Errors are becoming a more common source of breach.  Physical penetrations remain uncommon, but Misuse, Hacking, Malware and Social have all decreased as a percentage.   The most common type of incident was a DDoS, while the most common breach was Social (phishing). 
  • Errors (i.e., human errors such as mis-configurations, and mistakenly sharing data) saw a significant increase.  Almost half of all Errors were discovered by security researchers (not by an audit or other internal source).
  • Most (80%) hacks involved stealing or brute-forcing credentials.  Hackers did not rely as much on finding vulnerabilities or backdoors.  
  • Financially-motivated social engineering keeps growing in popularity year over year.
  • Although we often think of on-prem solutions as more secure than the cloud, cloud-based assets were involved in just 24% of breaches, while on-prem assets represented 70%. 
  • The most common attribute of affected data was that it was "personal" (PII/PI/PD), followed by credentials.
  • When looking for "patterns," Verizon found the most common was that breaches involved web applications. 
  • The Healthcare sector had the most breaches, followed by Finance, then a near-tie among Public, Education, Manufacturing, and Professional sectors.  
  • Retail saw only half as many breaches as the latter sectors.  Most Retail sector attacks involved e-commerce, a trend that grows year over year, and very few attacks involved point of sale (POS), representing a multi-year decline.  In Retail, 99% of breaches were financially-motivated.  Fewer than 20% of the data affected in Retail breaches was "internal" (trade secrets, business deals, etc.); the vast majority were types of personal data or payment information.
  • An organization's size has less relationship to the risk of breach this year than in recent years, probably due to the flight to the cloud, where large and small organizations are similarly vulnerable.  
  • Within the data analyzed, there were more than four times as many breaches in North America as in APAC or EMEA.

You can read all the details for yourself in the full report, if you have time.

Wednesday, May 13, 2020

The Robots in HR: Using Artificial Intelligence to Evaluate Job Candidates

If you haven't heard, the robots are now making decisions about your career.  

That's a bit of hyperbole, at least for now, but artificial intelligence tools are being used increasingly by human resources departments to help evaluate job applicants at all stages of the hiring and promotion process, for example:
  • Even before a candidate applies, an AI candidate sourcing tool might be used to crawl LinkedIn and recommend qualified candidates to recruiting teams. 
  • Many people are aware that AI and other software tools screen resumes to help HR and hiring managers filter out clearly unqualified applicants. (ASIDE: I've even seen well-meaning people recommend that applicants copy and paste all of the keywords in the job posting into their resume in tiny, white font to ensure they are not screened out by software. I don't recommend this tactic; sophisticated tools are unlikely to be so easily tricked and may punish the applicant.) 
  • When applicants are interviewed (for an initial hire or promotion), AI may be used to review video of the interview (in real time or after the fact) to evaluate a candidate's competence and fit.
  • Even after a candidate has been given an offer, AI may help perform background checks.
  • AI may be used to evaluate an employee's ongoing performance (using a myriad of metrics).
image of metallic humanoid robot posed similarly to Rodin's The Thinker sculpture
(C) istockphoto
In this post, I want to focus specifically on the use of artificial intelligence to evaluate videos of job interviews, about which fewer people seem to be aware.  Popular media outlets generally began reporting on the emerging trend in late 2018.  Tools like Deep Sense, Stafory (known as "Vera" internationally and "Wendy" in the US), HiredScore, and HireVue use algorithms that capture candidates’ tone of voice, word choice, facial expressions (including micro-expressions), and other data, and compare those signals with data from people who have previously been identified as high performers on the job.
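To make that mechanism a little more concrete, here is a deliberately oversimplified sketch: reduce an interview to a handful of numeric features and score the candidate by similarity to an average profile built from past high performers. Every feature name and number below is invented for illustration; real products are far more sophisticated.

# Deliberately simplified illustration; feature names and values are invented.
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0


# Invented feature order: [speech_rate, positive_word_ratio, smile_frequency, eye_contact]
high_performer_profile = [0.62, 0.71, 0.45, 0.80]  # averaged from past top performers
candidate_features = [0.55, 0.66, 0.20, 0.75]      # extracted from one interview video

score = cosine_similarity(candidate_features, high_performer_profile)
print(f"Similarity to high-performer profile: {score:.2f}")

The bias concerns discussed below flow directly from this design: if the "high performer" profile reflects the norms of one group, candidates whose speech and expressions differ from those norms will score lower regardless of ability.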

Big Upside Potential:

The benefits of tools like these are that they have the potential to improve efficiency, lower costs, improve quality, and produce fewer errors than an exclusively human, manual process.  (Did I mention they save companies time and money?)

Risks Associated With Video Interview AI:

There are, however, risks associated with AI, which have led Gartner to include the risk of AI/Machine Learning Bias in its Top 10 Emerging Risks of Q1 2020. (Subscription required.)

Specifically, the risks involve potential unfairness to individuals.  The tools may be ill-equipped to deal with unusual cases, such as an applicant who is otherwise qualified but has suffered a stroke or has facial scars/injuries that affect their facial expressions.  What are the implications under the Americans with Disabilities Act in that situation? 

The tools can be biased in other ways.  For example, if the applicant has vocal tonality, body language and facial expressions that reflect their national origin, but the pool of successful employees did not include people from a similar national origin, the AI could score the applicant poorly.  To make it even more concrete, imagine that the data pool is drawn from employees native to the U.S., and that data is used to evaluate an applicant from India.  The Indian applicant may have a tendency to shake their head from shoulder to shoulder as an affirmative indication, whereas in North America the same gesture tends to indicate uncertainty, irritation, or indifference.  (Indian-Canadian stand-up comic Russell Peters has a bit about this.) In this way, the AI can encode discrimination into the evaluation process in ways that are inaccurate (harming the potential employer) and unfair (harming the applicant). 
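One crude way an employer (or vendor) might monitor for this kind of disparity is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the most-selected group is a warning sign of adverse impact. Here is a minimal, hypothetical sketch of that check applied to AI interview scores; the groups, scores, and cutoff are made up, and a real audit would be far more rigorous.

# Hypothetical data and cutoff; illustration only, not a real audit.
from collections import defaultdict

PASSING_SCORE = 0.70  # invented cutoff used by the screening tool

candidates = [
    {"group": "A", "score": 0.91}, {"group": "A", "score": 0.75},
    {"group": "A", "score": 0.62}, {"group": "B", "score": 0.68},
    {"group": "B", "score": 0.71}, {"group": "B", "score": 0.55},
]

totals, selected = defaultdict(int), defaultdict(int)
for c in candidates:
    totals[c["group"]] += 1
    if c["score"] >= PASSING_SCORE:
        selected[c["group"]] += 1

rates = {g: selected[g] / totals[g] for g in totals}
highest_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest_rate if highest_rate else 0.0
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"Group {group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")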



In response to these criticisms, AI firms, and their advocates, point out that humans are probably more biased (consciously or unconsciously) than the AI tools, and that AI can actually serve as a control against the bias of an individual interviewer by providing a consistent scoring across a large number of interviews. Furthermore, advocates for AI point out that the solution to the bias problem is simply to use the tool more.  (Opinion: Ultimately, I think this may indeed be a critical aspect of a long-term solution to implicit bias.)

Legislators in Illinois were not content to wait for AI tools to accumulate enough data to correct their own biases, however.  They enacted the Artificial Intelligence Video Interview Act to mitigate the risk of unfair treatment, making Illinois the first state to regulate the practice, effective 1/1/2020.  (It's not surprising that Illinois is the first state to regulate in this area.  Illinois was also the first state to adopt a specific biometric data law, the Biometric Information Privacy Act, in 2008.)

Five Simple Requirements:

The AIVIA is quite concise, with just five requirements.  An employer must:
  1. notify applicants for Illinois-based positions before their video interviews are analyzed by AI;
  2. explain to applicants how the AI works and what general characteristics will be used to evaluate them;
  3. obtain applicants’ consent to the use of the technology; 
  4. limit the sharing of videos to those whose expertise or technology is necessary for the stated purposes; and 
  5. destroy videos within 30 days of a destruction request from the applicant (including all backup copies and copies held by third parties).
Criticisms of the AIVIA:

A downside to this brevity is the inherent ambiguity. There are a host of unanswered questions.  What must be included in the notice? How detailed must the description of AI be? Must consent be given in writing (or can it be on video, or even implied)? Must an applicant request destruction in writing? Is there an exception to the destruction requirement for other legal retention requirements or litigation holds? All of these questions and more await answers.

In addition, the AIVIA has other flaws, in my opinion.  Briefly, here are the three major shortcomings I see in the legislation:

  1. The AIVIA should regulate activity, not the input.  The AIVIA should be about the practice of using AI, not about the data stored in video format.  The data should be regulated by a broader data protection law with notice, consent, sharing limitations, and retention/destruction rights that apply to all personal information (or sensitive personal information).  
  2. The AIVIA is too narrow.  Analysis of video is just one of several ways in which AI is used to make important decisions about the rights, freedoms and opportunities of individual humans.  For example, AI analysis of audio recordings of interviews would not technically be covered by the AIVIA, and certainly AI analysis of resumes and cover letters is not regulated.  A comprehensive AI law is needed to avoid singling out a narrow use of AI for regulation while leaving other, similarly risky, uses of AI ungoverned.
  3. The AIVIA lacks important substantive protections.  It should include a substantive right to request human intervention or challenge an AI decision, as GDPR Article 22 provides in cases of "automated decision-making," and an audit requirement of some kind.  AI creators need not disclose their proprietary algorithms and the like, but a knowledgeable regulator should have the ability to review the processes for shortcomings that are likely to cause significant unfairness to individuals.    

Other Legal Concerns:


  • The (U.S., federal) Equal Employment Opportunity Commission probably expects employers to establish controls to mitigate the risk of AI bias.  The EEOC held a meeting way back in 2016 to address this issue.   Bloomberg Law reported in November that the EEOC was investigating at least two cases in which AI bias was alleged.  
  • There is also a bill to regulate AI before the U.S. Congress. On April 10, 2019, a group of House Democrats introduced the Algorithmic Accountability Act of 2019, which would regulate any “automated decision system” that impacts consumers. It seems unlikely to pass at this point.
  • As mentioned above, the use of AI to evaluate job candidates is likely to be considered "automated decision-making" and/or "profiling" under Article 22 of the GDPR, meaning an organization must (i) give European applicants information about the processing; (ii) provide simple ways for EEA applicants to request human intervention or challenge a decision; and (iii) regularly audit or check to ensure automated systems are working as intended.
  • Given the likelihood that other AI regulation is coming, organizations should think about adopting a common-sense, flexible, principles-based AI policy.  Doing so is probably the right move from a data ethics perspective, and may also help avoid future pain and effort as regulation in this area evolves.  


Sponsored Post: This blog post has been brought to you by the friendly automatons at Skynet.  

Sunday, February 9, 2020

First Take: Summary of Revisions to the CCPA Regulations

On Friday afternoon, and without advance notice, the California Department of Justice released changes to the California Consumer Privacy Act (CCPA) regulations that will affect how companies all over the world attempt to comply with the CCPA.

Background

Before describing the changes, let's remember how we got here.  The CCPA began as a ballot initiative funded by a wealthy real estate developer.  The ballot measure was so popular that it was certain to pass in the 2018 election, so the California legislature struck a deal with the proponents to make it a statute immediately (which made it easier for the legislature to amend).  It was amended once in 2018 and several times in September 2019.

The law became effective on January 1, 2020. When the CCPA was enacted back in June 2018, it delegated certain rulemaking responsibility to the California Department of Justice, led by the Attorney General.  After fifteen long months--and painfully close to the CCPA's effective date of January 1, 2020--the Attorney General released proposed regulations on October 10, 2019.  Because the Attorney General waited so long to publish regulations, the statute says that the California Department of Justice cannot begin enforcing the law until six months following its effective date--July 1, 2020.  The Attorney General has said, however, that companies are expected to comply on January 1, 2020, and enforcement actions after July 1 might relate to activities taken between January 1, 2020 and July 1, 2020.

CCPA Opt out button image from CCPA Regs
The new, standard CCPA opt-out button
When the regulations were first released on October 10, 2019, there was a public comment period, and many interested people, companies, and groups commented on the proposal.  Many pointed out that the regulations created new burdens not found in the statute, failed to clarify many ambiguities in the statute, and introduced new ambiguities.  The Attorney General, however, said in December that there would be no major changes to the regulations, despite the voluminous comments and criticisms. 

In response to the comment letters, and to clarify certain ambiguities in the regulations, the Department of Justice revised the regulations on Friday, February 7, 2020.  Despite the Attorney General's statement, hardly a paragraph of the original regulations is left intact; all 32 pages of the revisions show significant changes.

The Changes

Many of the changes are merely clarifying edits and do not signal substantive policy changes.  While it will take time to digest and understand the effects of the changes, my quick, initial summary of the more salient changes is below:

  • The disclosure of the categories of sources of personal information will be more specific than the three categories originally described. Several additional examples are provided: advertising networks, internet service providers, data analytics providers, operating systems and platforms, social networks.  Businesses will need to revise their public-facing privacy policy statements and disclosures in response.
  • New provisions addressing employee data are included. The term "employment-related information" is added.  Disclosures can be hyperlinked. 
  • More specific instructions for handling household requests are included.  Companies will need to revise their procedures to address this. 
  • The definition of "personal information" is to be interpreted slightly less broadly than some have thought.  For example, even though the statutory definition includes IP addresses, not all IP addresses will be considered personal information.  A consumer's IP address is only considered personal information if it can be linked to the consumer or household. 
  • The WCAG 2.1 (not 2.0!) accessibility standards are incorporated by reference. 
  • The "notice at collection" may be oral.
  • Non-intuitive collection via a mobile device will require a "just-in-time" notice, such as a pop-up window.
  • Businesses may use personal information for additional purposes if they are not "materially different" from the purposes previously disclosed.  This gives businesses a little more flexibility to adjust to new use cases than the previous language.
  • It appears that purposes need not be disclosed for each category of personal information, as originally required.  (Many companies may not need to describe purposes in the granular detail found in privacy policy statements published on January 1.)
  • Companies that collect consumer personal information only indirectly can avoid the notice of collection if they register as data brokers with the California Attorney General. All of the implications of this change are unclear to me at this point, but this could be very significant.
  • Mobile apps can use hyperlinks to privacy policy statements. 
  • A description of the process for authorized agents to demonstrate authority is no longer required to be included in the notice of opt-out rights.
  • The privacy policy statement URL is no longer required to be included in the notice of opt-out rights. 
  • Businesses do not need to commit to never sell personal information in the future in order to avoid opt-out notices, as was perhaps implied by the initial regulatory language.
  • A simple opt-out graphic is included for use in lieu of the hyperlink text for the opt-out mechanism.
  • When giving a notice of financial incentive, businesses must disclose the value of the consumer's data and explain how the financial incentive is related to that value.  Businesses will need to revise these notices to comply with the change.  This likely means loyalty and discount program terms and conditions need to include a dollar value for consumer data.  There are also new examples relating specifically to loyalty programs.
  •  Categories of sources of information, purposes for collection, and categories of third parties are no longer required to be disclosed separately for each category of personal information.  (The complex matrices and other highly-granular disclosures that some businesses have already released in response to the proposed regulations now seem unnecessary.  Those companies may want to make more general statements going forward.)
  • The requirement to affirmatively state whether data has been sold in the prior 12 months is removed.
  • The categories of third parties to whom personal information is sold must be disclosed separately for each category of personal information.  
  • Verification processes must be disclosed generally, not specifically, in the notice at collection.
  • The right to opt-out disclosure must state whether or not the company sells personal information.
  • Online-only, direct-to-consumer businesses can limit consumer requests to email; all others must offer two or more options.  The webform is no longer mandatory if a business has a website. 
  • The requirement to accept consumer requests to know and requests to delete via an additional method based on how the business interacts with consumers is now recommended but not mandatory.  
  • Confirmation must be sent within 10 business days, not calendar days.
  • If a business cannot verify identity within 45 days of receiving a request, the business may deny the request.
  • Businesses need not search for personal information if four criteria are met (this will be rare).
  • Business may not disclose certain biometric data in response to a request to know.
  • A response to a request for categories of personal information must include additional information.
  • If identity cannot be verified, a business must ask if the consumer wishes to opt out of the sale of their personal information (for which verification is not required).
  • A business does not have to describe how it deleted the consumer's data pursuant to a deletion request, but must state whether or not it has done so.
  • The revisions say that a business may tell a consumer that it is retaining a record of a deletion request "to ensure the personal information remains deleted from the business's records."  (While re-introduction of data through automated data syncs and dumps is a legitimate concern, I worry that such a statement could lead a consumer to think that a business has a duty to avoid collecting the consumer's data in the future, or to periodically purge the data in the future.)
  • If prohibited from fulfilling a deletion request by law, the business must now explain the legal conflict. (!)
  • Several changes to the constraints placed on service providers are present, including the ability of service providers to use personal information to improve their own services.  (This was important, especially for AI providers.)
  • It must be "easy" to opt-out and involve "minimal steps." 
  • Additional expectations are set regarding the honoring of browsers' privacy settings and the "opt out" signal.  It is still unclear how this will work in the real world.
  • An authorized agent must have written authority that is signed by the consumer, and the business can require the consumer to confirm directly to the business that the authorized agent has permission to do so.
  • Statistical disclosure is due on July 1 of each calendar year, for businesses that meet the threshold requirements for reporting.  (I believe July 1, 2021 will be the first reporting deadline.)
  • Household requests require verification of all household members.  (This seems likely to cause most businesses to treat household requests as multiple individual requests, for practical operational purposes.)
  • Verification cannot involve a fee payable by the consumer, even if payable to a third party.  Businesses cannot require notarization for verification unless the business pays for the notarization.  (Some businesses will need to revise their verification processes.)
  • Requests must be denied unless verified in accordance with the regulations (businesses seem to have no discretion). 
  • Authorized agents must use reasonable security procedures and cannot use a consumer's data for additional purposes.
  • Businesses must establish a method to verify that a parent acting on behalf of a child under 13 is the parent (or guardian).
  • If the value of consumer data cannot be calculated or does not relate to the value of a financial incentive, the financial incentive cannot be forfeited in response to a request to delete (unless the incentive is required by federal law).  The value of consumer data can be calculated based on the value to all individuals, not just the business's consumers.  The "typical consumer" concept is removed.
Again, this is just a quick summary after an initial reading of the revisions.  As I (and others) continue to scrutinize the revisions, better understandings and additional insights are likely to emerge, so please stay tuned.

What's Next?

The revisions to the regulation trigger an additional 15 day public comment period, which ends on February 24.  Following the comment period, the Department of Justice will submit the final text to the California Office of Administrative Law, which has 30 business days to review the regulations before they will go into effect.  In other words, the earliest date that the regulations could become effective is early April.  The latest date I can imagine them becoming effective is July 1, when the Department of Justice begins bringing enforcement actions against companies for violations.

If you would like to read the revised regulations for yourself, you can find them here. The notice is here.


Thursday, January 2, 2020

Was 2019 the “Year of Privacy” in the U.S.? (Or Will It Be 2020?)



What a year it has been! As one year closes and another begins, let us take a moment to reflect on the significance of 2019. It may not be an exaggeration to say that 2019 brought some of the most important changes in privacy and data security law that most of us have seen in our professional careers.


Yet, with all the momentum toward heightened consumer data protection, there remain conspicuous absences: Congress again considered, and again failed to deliver, a comprehensive privacy and data security bill. The North Carolina General Assembly declined to meaningfully revise the State’s core privacy and cybersecurity statute (the Identity Theft Protection Act or ITPA); House Bill 904, the most recent incarnation of Representative Jason Saine’s and Attorney General Josh Stein’s bipartisan update to the ITPA, languishes in the General Assembly. The General Assembly did, however, approve some modest updates to the data security laws affecting North Carolina government entities, in HB 217/SL 2019-200, giving the State Chief Information Officer greater oversight of State agencies’ cybersecurity controls.



Other states were more successful in modernizing privacy and data security laws in 2019. Forty-three states (and Puerto Rico) considered more than 300 proposed changes to privacy and cybersecurity laws in 2019, ultimately enacting 31 statutes. Although they cannot all be described in detail in this post, most have at least one of the following aims:

  • requiring government agencies or businesses to implement training or specific types of security policies and practices;
  • creating task forces or commissions;
  • restructuring government for improved security;
  • studying the use of blockchain for cybersecurity;
  • providing for the security of utilities and critical infrastructure;
  • exempting cybersecurity operations information from public records laws;
  • addressing the security of connected devices (the Internet of Things);
  • regulating cybersecurity within the insurance industry;
  • providing funding for improved security measures; and
  • addressing cybersecurity threats to elections.

One state law, of course, stands out from among all others. Throughout 2019, the California Consumer Privacy Act (CCPA) dominated the headlines (as well as the thoughts, dreams and nightmares of privacy and data security lawyers). The CCPA is driving a fundamental shift in the way we think about data protection in the United States, forcing companies to carefully contemplate the personal data they collect, hold, use, and share. Though it lacks the aggressive extraterritorial reach of Europe’s General Data Protection Regulation, the CCPA will apply to many companies throughout the United States and around the world, including many North Carolina-based businesses. Though enacted in 2018, the CCPA was amended, and proposed regulations were released, in late 2019; and with a January 1, 2020 effective date, most practitioners were intensely focused on the CCPA throughout 2019.


Even as 2020 arrives, companies are still wrestling with many patent and latent ambiguities in the CCPA and its proposed regulations. In fact, many have argued that the California Attorney General’s proposed regulations added to the ambiguities rather than reducing them. The regulations are expected to become final very soon, and the Attorney General has stated publicly that the final regs are not expected to differ substantially from the proposed regs—in other words, the final regulations are unlikely to offer new answers.


Because the California Attorney General’s proposed regulations were released so late, and were not made official by the statutory effective date of January 1, the Attorney General is delaying enforcement of the regulations by six months—until July 1. However, his office intends to take action on violations of the statute that occur between January 1 and July 1, and plaintiffs could bring claims under CCPA beginning January 1. Accordingly, most companies would prefer to achieve compliance sooner rather than later. That may be easier said than done. Even companies that have been actively pursuing compliance since 2018 were forced to pivot due to the various amendments passed in September 2019 (and the failure of some amendments to pass) and the new requirements imposed by the proposed regulations released in October 2019, putting them far behind schedule. According to a survey conducted by the International Association of Privacy Professionals in April 2019, one-quarter of companies were targeting compliance by July 1 (the enforcement date), rather than January 1 (the effective date); in a subsequent survey this summer, the number had grown to one-third. My suspicion is that a majority of companies subject to the CCPA are now targeting a July 1 compliance date, in light of the many new and different requirements and uncertainties arising from the amendments and regulations.


As dramatic as 2019 has been for privacy and data security law, 2020 may be even more eventful. We can be fairly certain that plaintiffs will bring actions under CCPA and other laws; the Federal Trade Commission, state Attorneys General, and other domestic authorities will bring enforcement actions; states and municipalities will continue to enact divergent data protection laws, further complicating the domestic legal landscape; and foreign nations will continue to adopt data protection laws, largely drawing upon common principles found in the GDPR and its predecessors. On top of all of this, the creator of the CCPA, Alastair Mactaggart, is already advancing a so-called “CCPA 2.0” to tighten the requirements and strengthen enforcement. It is an exciting (and sometimes frightening) time to be a privacy and data security lawyer. I look forward to navigating these uncharted waters along with you in 2020! 



[This blog post is re-posted from the North Carolina Bar Association.]