Sunday, December 6, 2020

Beware of Dark Patterns in Online UI/UX


Have you ever found yourself spending countless hours unintentionally falling down an internet rabbit hole? Have you noticed that it is often remarkably easy to sign up for a free trial, but that unsubscribing can be a Kafkaesque ordeal?  The manipulation tactics behind these experiences are known as "dark patterns," and they are emerging as a new area of legal regulation.

Relativity, by M.C. Escher, 1953

Technology companies now employ specialized professionals to manage the user interface and user experience (UI/UX), and in many ways the tactics they employ can be quite helpful. When UI/UX engineering goes awry, as millions have recently learned thanks to the popular documentary film The Social Dilemma, users can suffer.

The Electronic Frontier Foundation describes dark patterns as the opposite of Privacy by Design, and worries the patterns will have dire consequences for personal privacy. When technology is designed to manipulate, obfuscate, or misdirect, one of the first harms the user is likely to experience is a loss of control over their personal information.  Financial and other harms are also likely.

Though common on social media platforms, dark patterns are also found elsewhere. I am particularly interested in the retail sector these days, and a recent Princeton University study examined more than 10,000 retail sector websites and found that more than 1,200 (12%) used dark patterns!

The US Federal Trade Commission recently defined dark patterns as “design features used to deceive, steer, or manipulate users into behavior that is profitable for an online service, but often harmful to users or contrary to their intent.” In a recent enforcement action against Age of Learning for its use of dark patterns in connection with an online learning tool for kids, the FTC claimed Age of Learning made it unduly difficult for users to cancel recurring subscription fees after initially promising “Easy Cancellation.”

Similar concerns exist in Europe.  France's data protection authority, CNIL, has also published a report on dark patterns.

Although Section 5 of the FTC Act likely gives the FTC the power to address egregious dark patterns that are fundamentally unfair or deceptive, a legislative response may be coming.  In 2019, a bill was introduced in the US Senate to address these practices, the "Deceptive Experiences To Online Users Reduction (DETOUR) Act". As of today, it's still in committee.

Technology professionals should be aware of dark patterns and take steps to avoid them when designing consumer-facing technologies.  (You don't want to end up in a regulator's crosshairs, nor on Harry Brignull's Dark Patterns Hall of Shame!)



Thursday, October 8, 2020

A New Technology for Web Browser Opt-Outs Could Trigger New Legal Requirements

If your organization has a website directed at California consumers (or US consumers generally), you should start thinking about this soon: A group has developed the technology to honor web browsers’ privacy signals, which could have implications under the California Consumer Privacy Act.

The California Online Privacy Protection Act of 2003 (CalOPPA) was the first broadly-applicable consumer privacy statute in the US. It merely required companies to have website privacy policy statements and to state clearly whether or not they would honor “Do Not Track” signals from a user’s web browser; it did not require companies to honor those signals.

The lawmakers assumed that technology would be created to honor those opt-out signals, and that companies would be pressured by market forces to honor them, but that never really happened.  It has been almost impossible to honor those signals until now, and as a result, almost all US-facing websites have a privacy policy statement that says “we do not recognize browsers’ Do Not Track signals.”

California's Attorney General, whose office enforces CalOPPA, was not satisfied with industry's failure to develop the technology and honor Do Not Track signals.  When the AG released regulations under the California Consumer Privacy Act, which became enforceable in mid-August, he included a provision that says that companies must honor browsers’ privacy signals as a valid Do Not Sell instruction…even though the technology doesn’t yet exist.

The AG explained that the requirement "is forward-looking and intended to encourage innovation and the development of technological solutions to facilitate and govern the submission of requests to opt-out." Section 999.315 of the regulations says "[i]f a business collects personal information from consumers online, the business shall treat user-enabled global privacy controls, such as a browser plug-in or privacy setting, device setting, or other mechanism, that communicate or signal the consumer’s choice to opt-out of the sale of their personal information as a valid request submitted pursuant to Civil Code section 1798.120 for that browser or device, or, if known, for the consumer." [emphasis added]

Now, it appears a group of companies and nonprofits, including the Electronic Frontier Foundation and DuckDuckGo, have developed the technology, calling it the Global Privacy Control framework.  The express intent, according to the creators, is "to communicate a Do Not Sell request from a global privacy control, as per CCPA-REGULATIONS §999.315." It is already available in beta in certain browser updates or as add-on browser scripts, and consumers will begin sending those signals all over the Internet.  Companies will be under tremendous pressure to adopt the technology framework and begin honoring the signals quickly.
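
For readers curious about the mechanics: as I read the public draft, the GPC framework surfaces the consumer's choice in two places, a "Sec-GPC" request header sent to web servers and a globalPrivacyControl property on the browser's navigator object. The sketch below shows one way a site's front-end script might check for the signal. It is an illustration based on my reading of the draft, not a definitive implementation, and the names could change before the specification is finalized.

```typescript
// Minimal sketch of detecting a Global Privacy Control signal in the browser.
// Property name per the public GPC draft; it may change before finalization.
function gpcOptOutDetected(): boolean {
  // The property is not yet part of the standard Navigator type, so widen it.
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

if (gpcOptOutDetected()) {
  // Treat the signal as a Do Not Sell request for this browser or device,
  // e.g., by suppressing third-party tags that could constitute a "sale."
  console.log("GPC signal present: honoring Do Not Sell for this browser.");
} else {
  console.log("No GPC signal present.");
}
```

Server-side code could perform the equivalent check by looking for a "Sec-GPC: 1" request header and recording that the opt-out was received and honored.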

 It is not yet perfectly clear if and when the GPC would be treated as a legally binding Do-Not-Sell instruction.  Here's why:

  • It is not clear whether the AG had the authority to include this requirement in section 999.315 of the regulations.  The global privacy control concept is not expressly stated in the CCPA, although the DOJ and the Office of Administrative Law obviously felt the authority was there.  The delegation of authority to the AG in Section 1798.185(a)(7) is broad.
  • Competing frameworks could develop. It is not clear who will decide whether a framework is "official" or "enforceable."  Perhaps a formal endorsement of the California Attorney General is required.  California's Attorney General has informally endorsed the GPC framework via Twitter.
  • Right now the GPC framework is not a finalized standard, according to the website.  It's still being tested.  It is not certain when it would be finalized.

Because the CCPA's definition of "sale" is so broad, and could be interpreted to cover technologies that are ubiquitous across the web (such as third-party advertising cookies), the GPC could affect large numbers of website operators.

Key Point: Companies covered by CCPA should begin thinking now about whether and how to implement this new technical framework. 

Tuesday, July 14, 2020

The Employee Data Dilemma: Should Companies Establish An Employee Privacy Program Now, or Later?

At a time when a global pandemic and economic recession have left many employers in the US cash-strapped, most would probably prefer to defer any investment of time and money in an employee data privacy compliance program.  

Employers with a substantial number of California employees (or contractors) are currently faced with a conundrum: whether to establish an employee data privacy program now, or later.

The California Consumer Privacy Act, as originally written, applied to personal information about "consumers," but the term "consumer" was so broadly defined that some people speculated that it covered employees and contractors as well as individual customers.  Later in 2019, AB 25 was proposed by Assemblymember Chau to clarify that the intent was not to cover employees, but after objections were raised (and backed by powerful labor unions, I'm told), AB 25 was amended to create a temporary, partial exclusion from the CCPA until January 1, 2021.  That is the version of AB 25 that passed.  The idea, as I understand it, was that the California legislature would come up with some other way of addressing employee privacy before the end of 2020.  (I wrote about that briefly here.)

We are now halfway through 2020, and the legislature has not yet delivered a solution.  Companies are starting to grow concerned.  Under current law, companies have six months to create an employee data privacy program.  

The California Privacy Rights Act, better known as "CCPA 2.0," is a ballot initiative promoted by the same people behind the CCPA, and it has officially qualified to be on the November ballot in California.  Polling suggests it is highly likely to be approved by voters.  There is one aspect of CPRA that would help companies: It would extend the partial exemption of employee and contractor data for two additional years.

The problem for companies is that we will not know if CPRA has passed until November 3. If it does not pass, it will be too late to do the work required by January 1 (less than two months later). 

Companies must decide whether (a) to take the gamble that CPRA will pass, and defer the work, or (b) to do the work now, even though it likely will not be necessary to comply until January 1, 2023.

(There is a third possibility: Assemblymember Chau has introduced AB 1281, which would push the deadline out by one year to January 1, 2022.  Unfortunately, that bill has not made meaningful progress in the legislature, and currently lingers in committee.  Perhaps, if the CPRA somehow fails, AB 1281 could be enacted rapidly during November or December.)

Based on my informal survey of privacy professionals, it seems many companies are not preparing employee privacy programs, and are simply assuming that CPRA will pass.  (I have not yet seen any actual polling of privacy pros on this question.)  There is certainly a degree of risk in this approach.  Companies with the resources would be best served by preparing now, rather than later.  Companies struggling to survive, however, have a difficult decision to make.


Saturday, June 20, 2020

What is a "Fellow of Information Privacy"?


Inclusion among the IAPP's Fellows of Information Privacy means that a privacy professional has "demonstrated...comprehensive knowledge of privacy laws, privacy program management and essential data protection practices through... successful completion of credentials... considerable on-the-job experience," and continuing education.  The criteria for consideration are:
  • hold a CIPP designation (US, Canada, Europe or Asia); and
  • hold either a Certified Information Privacy Manager (CIPM) or Certified Information Privacy Technologist (CIPT) certification; and
  • demonstrate three years of work experience focused on data privacy (ISC2, ISACA, IEEE certifications can satisfy one year); and
  • provide peer reviews; and
  • submit a written statement describing one's experience and expertise.
If you're an experienced, credentialed privacy professional, you should consider applying.  (There is no cost.)


Sunday, June 7, 2020

What Is A "Sale" of Personal Information Under the CCPA? No One Knows (Although the California Department of Justice Says It Is Obvious)



In my opinion, the most vexing unresolved question associated with the California Consumer Privacy Act is what the term "sale" means.  Specifically, what "valuable consideration" renders the sharing of personal information a "sale"?  The consequences for businesses are substantial. 

One public comment submitted during the rulemaking process (Comment W280-5) raised this specifically and politely: "...it would be useful to have the definitions of ‘sale’ and ‘valuable consideration’ clarified."

The Attorney General's regulations and guidance have repeatedly failed to address or explain this, despite pleading from industry. Instead, the California Department of Justice glibly declined on the basis that the meaning is already clear: "There is no need to clarify the definition of 'sale'....   [T]he terms 'valuable' and 'consideration' are reasonably clear and should be understood by the plain meaning of the words."

It is anything but clear.

Even the CCPA's proponents, Californians for Consumer Privacy, know it is unclear.  They have attempted to add clarity in the CPRA proposal, although even that attempt falls far short. 

As a result, we will likely spend enormous time, energy, and money (including taxpayer dollars) in the coming months and years trying to bring clarity to the question. In the meantime, there will be massive uncertainty in several of the industries that Californians and Americans desperately need to help revive the economy as the COVID-19 pandemic subsides.
Sale hang tag, red, with word "sale" printed in white
(R) FreeVectors.net

Saturday, May 30, 2020

Are The California Consumer Privacy Act Regulations Delayed?

***Update: The California Attorney General filed the final regulations on July 2nd, late, and requested expedited review so that they could be enforceable within 30 business days (as opposed to October 1 or even the 90 day period provided for by COVID-19 changes to the rules).  After having waited almost two years to finalize rules, and still missing the deadline, the Attorney General now expects companies to be able to react on short notice. ***

It appears the California Consumer Privacy Act regulations will not become effective on July 1st after all.

The latest round of revisions to the regulations, released by the California Department of Justice on March 11, triggered an additional 15-day public comment period, which ended in late March.  Following the comment period, the Department of Justice was required to consider any comments received and submit the final text to the California Office of Administrative Law for review.  (For the handful of legal nerds interested in the minutiae, the entire administrative rulemaking process in California is described here.)

Under the ordinary rulemaking process in California, regulations become effective on one of four specific dates, based on when they are filed.  Rules filed between March 1 and May 31 are effective on July 1, but rules filed later (until August 31) are effective on October 1.
image of the California state flag (public domain)
It doesn't look like the California Department of Justice met the May 31 deadline, because the CCPA regulations are not showing up on the OAL's list of regulations under review as of today.  It seems possible that the regulations might not be effective until October 1.  It's also possible that the regulations could have been submitted and will still become effective on July 1. The OAL says that there are exceptions to the normal process "if the agency demonstrates good cause for an earlier effective date."

The statute itself required the California DOJ to promulgate rules by July 1 ("On or before July 1, 2020, the Attorney General shall solicit broad public participation and adopt regulations to further the purposes of this title"), which is also the date on which the DOJ must begin enforcing the CCPA. It now appears that the DOJ will begin enforcing the statutory language on July 1 but will not be able to enforce regulations until October.

[If you have any additional insights on this regulatory issue, please share them with me.]

Wednesday, May 27, 2020

Unlocking Your Next Career Move With Your Face: A New Facial Recognition Technology Law In Maryland For Employers to Consider


I wrote recently about the Artificial Intelligence Video Interview Act in Illinois, the first law of its kind regulating the use of A.I. in evaluating job candidates' interviews.  You can read all about it here.

image of dots and lines superimposed on a face
(c) Freepik
Maryland will have a similar law, effective October 1, 2020.  There are important differences, however.  Unlike the Illinois AIVIA, the Maryland law is not targeted precisely at the use of A.I. to evaluate interviews.  Instead, it addresses the use of facial recognition (as a biometric identifier for recognition or tracking).

The new requirements:

The law simply requires that job applicants sign a short waiver that includes (i) their name, (ii) the date, (iii) their consent to the use of "facial recognition during the interview," and (iv) a recital that the applicant has read the waiver.

You can read the statute here.

My opinion:

In my view, the policy decision here is correct, but the statute is under-inclusive. 
  • The statute is consistent with the general privacy principle of notice and consent at or before the time personal information is collected.  Beyond that, it is clearly the right thing for employers to notify applicants before using facial recognition, as a matter of personal data rights. 
  • While the brevity of the statute is appreciated, it leaves a number of questions unanswered, such as the effect of an applicant's refusal to consent.  It is unclear whether a verbal, recorded waiver would suffice.  Is the consent revocable?  The statute also fails to address the use of facial biometric data for purposes other than recognition and tracking (such as to evaluate interviewees' performance).  One presumes these issues will be litigated in the years to come, unless the Maryland legislature enacts a more comprehensive privacy law or at least a biometric data privacy law. (The Maryland Personal Information Privacy Act covers biometric data but only in the context of a data security breach.)
  • A more complete law would have a broader scope (e.g., all biometric data or all personal data) and would address other core rights such as use limitations, retention limitations, and the sale or sharing of a person's biometric data (compare to Texas' biometric data law).  

How to implement:

The waiver form prescribed by the new Maryland law should be very simple to draft (in a matter of minutes). 

Some organizations might consider whether to create or amend a policy statement governing the use of facial recognition data, and if so, whether to incorporate into an existing biometric data policy (perhaps adopted pursuant to Illinois' BIPA) or an existing A.I. policy.




Wednesday, May 20, 2020

There's a New Cop on the Privacy Beat in Canada

Is there a new sheriff (make that Mountie) in town?

The Canadian Competition Bureau has just made waves by entering into a CA $9.5 million settlement with Facebook over privacy violations in connection with the Cambridge Analytica scandal.  The CB alleged that Facebook told users they could control access to their data, but the data was still accessible by third-party developers.

Canada has national and provincial data protection authorities (the national DPA is the Office of the Privacy Commissioner and provincial DPAs have similar names), but according to a law professor at the University of Ottawa, the OPC's enforcement authorities under the PIPEDA are weak.  The Competition Bureau, by contrast, has much greater enforcement powers, at least by Canadian standards. Still, for large, international organizations like Facebook, a CA $9 million penalty (and $500k in costs) is hardly a deterrent.  By comparison, the US Federal Trade Commission fined Facebook USD $5 billion.
 
What is to be done about the effete OPC?  The recent Digital Privacy Act, effective in 2018, clearly failed to strengthen the PIPEDA and the OPC sufficiently. (Here is my earlier post about that.) The Canadian government released a Digital Charter in May 2019 to outline proposed PIPEDA reforms.  The tenth and final priority of the Digital Charter is to strengthen enforcement and bring real accountability to bear.  Only time will tell if that happens.

Here you can read the actual settlement terms: https://decisions.ct-tc.gc.ca/ct-tc/cdo/en/item/471812/index.do

And here's a final, amusing link: The Competition Bureau's own Facebook page: https://www.facebook.com/competitionbureaucanada

image of laptop with a red maple leaf sticker

Tuesday, May 19, 2020

Highlights of the 2020 Data Breach Investigations Report by Verizon

There are a number of surveys and studies published each year that provide empirical data about the cybersecurity landscape. One of them is the Verizon Data Breach Investigations Report, which compiles publicly-reported breaches with other sources (including intelligence gathered by the Verizon Threat Research Advisory Center). The 2020 DBIR has just been published. This year, Verizon amassed 157,525 incidents and 108,069 breaches.  Here are some interesting findings:
  • Trojans were once the primary means of hacking, comprising as much as 50% of all breaches in 2016. They now comprise only 6.5%.
  • Phishing and credential theft are on the rise.
  • External attackers continue to be much more common than internal threats.
  • Money continues to be the main motive, far ahead of espionage.
  • The most common hacker profile is the organized crime ring, followed by state-sponsored actors.
  • According to the FBI, hackers are more likely to be close by than around the globe: 85% of victims and perpetrators were in the same country, 56%  in the same state and 35% in the same city.
  • Errors are becoming a more common source of breaches.  Physical penetrations remain uncommon, but Misuse, Hacking, Malware and Social have all decreased as a percentage.  The most common type of incident was a DDoS attack, while the most common breach type was Social (phishing).
  • Errors (i.e., human errors such as mis-configurations, and mistakenly sharing data) saw a significant increase.  Almost half of all Errors were discovered by security researchers (not by an audit or other internal source).
  • Most (80%) hacking-related breaches involved stolen or brute-forced credentials.  Hackers did not rely as much on finding vulnerabilities or backdoors.
  • Financially-motivated social engineering keeps growing in popularity year over year.
  • Although we often think of on-prem solutions as more secure than the cloud, cloud-based assets were involved in just 24% of breaches, while on-prem assets represented 70%. 
  • The most common attribute of affected data was that it was "personal" (PII/PI/PD), followed by credentials.
  • When looking for "patterns," Verizon found the most common was that breaches involved web applications. 
  • The Healthcare sector had the most breaches, followed by Finance, then a near-tie among Public, Education, Manufacturing, and Professional sectors.  
  • Retail saw only half as many breaches as the latter sectors.  Most Retail sector attacks involved e-commerce, a trend that grows year over year, and very few attacks involved point of sale (POS), representing a multi-year decline.  In Retail, 99% of breaches were financially motivated.  Less than 20% of the data affected in Retail breaches was "internal" (trade secrets, business deals, etc.); the vast majority was personal data or payment information.
  • An organization's size has less relationship to the risk of breach this year than in recent years, probably due to the flight to the cloud, where large and small organizations are similarly vulnerable.  
  • Within the data analyzed, there were more than four times as many breaches in North America as in APAC or EMEA.

You can read all the details for yourself in the full report, if you have time.

Wednesday, May 13, 2020

The Robots in HR: Using Artificial Intelligence to Evaluate Job Candidates

If you haven't heard, the robots are now making decisions about your career.  

That's a bit of hyperbole, at least for now, but artificial intelligence tools are being used increasingly by human resources departments to help evaluate job applicants at all stages of the hiring and promotion process, for example:
  • Even before a candidate applies, an AI candidate sourcing tool might be used to crawl LinkedIn and recommend qualified candidates to recruiting teams. 
  • Many people are aware that AI and other software tools screen resumes to help HR and hiring managers filter out clearly unqualified applicants. (ASIDE: I've even seen well-meaning people recommend that applicants copy and paste all of the keywords in the job posting into their resume in tiny, white font to ensure they are not screened out by software. I don't recommend this tactic; sophisticated tools are unlikely to be so easily tricked and may punish the applicant.) 
  • When applicants are interviewed (for an initial hire or promotion), AI may be used to review video of the interview (in real time or after the fact) to evaluate a candidate's competence and fit.
  • Even after a candidate has been given an offer, AI may help perform background checks.
  • AI may be used to evaluate an employee's ongoing performance (using a myriad of metrics).
image of metallic humanoid robot posing similar to Rodin's The Thinker sculpture
(C) istockphoto
In this post, I want to focus specifically on the use of artificial intelligence to evaluate videos of job interviews, a practice of which fewer people seem to be aware.  Popular media outlets generally began reporting on the emerging trend in late 2018.  Tools like Deep Sense, Stafory (known as "Vera" internationally and "Wendy" in the US), HiredScore and HireVue use algorithms that capture candidates’ tone of voice, word choice, facial expressions (including micro-expressions), and other data, and compare those signals with data from people who have previously been identified as high performers on the job.
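
To make the comparison concrete, here is a deliberately simplified, entirely hypothetical sketch of how such a scoring step might work: the candidate is reduced to a numeric feature vector and scored by similarity to an average profile built from past "high performers."  The feature names and the cosine-similarity math are my own illustration, not any vendor's actual algorithm, but the sketch shows where bias can creep in: the score depends entirely on who happens to be in the training pool.

```typescript
// Hypothetical illustration only -- not any vendor's actual algorithm.
type FeatureVector = number[];

// Average the feature vectors of past "high performers" into a single profile.
function average(vectors: FeatureVector[]): FeatureVector {
  const sum = new Array(vectors[0].length).fill(0);
  for (const v of vectors) v.forEach((x, i) => (sum[i] += x));
  return sum.map((x) => x / vectors.length);
}

// Cosine similarity: closer to 1.0 means the candidate "looks like" the profile.
function cosineSimilarity(a: FeatureVector, b: FeatureVector): number {
  const dot = a.reduce((acc, x, i) => acc + x * b[i], 0);
  const norm = (v: FeatureVector) => Math.sqrt(v.reduce((acc, x) => acc + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Training pool drawn from current employees -- the potential source of bias.
// Features (illustrative): [speech pace, smile frequency, vertical head-nod rate]
const highPerformers: FeatureVector[] = [
  [0.8, 0.6, 0.9],
  [0.7, 0.7, 0.8],
];
const profile = average(highPerformers);

// A candidate whose affirmative gesture is a side-to-side head wobble rather
// than a vertical nod scores low on the third feature, and thus on "fit."
const candidate: FeatureVector = [0.75, 0.65, 0.2];
console.log("Fit score:", cosineSimilarity(candidate, profile).toFixed(2));
```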

Big Upside Potential:

The benefits of tools like these are that they have the potential to improve efficiency, lower costs, improve quality, and produce fewer errors than an exclusively human, manual process.  (Did I mention they save companies time and money?)

Risks Associated With Video Interview AI:

There are, however, risks associated with AI, and those risks led Gartner to include AI/Machine Learning Bias in its Top 10 Emerging Risks of Q1 2020. (Subscription required.)

Specifically, the risks involve potential unfairness to individuals.  The tools may be ill-equipped to deal with unusual cases, such as an applicant who is otherwise qualified but has suffered a stroke or has facial scars/injuries that affect their facial expressions.  What are the implications under the Americans with Disabilities Act in that situation? 

The tools can be biased in other ways.  For example, if the applicant has vocal tonality, body language and facial expressions that reflect their national origin, but the pool of successful employees did not include people from a similar national origin, the AI could score the applicant poorly.  To make it even more concrete, imagine that the data pool is drawn from employees native to the U.S., and that data is used to evaluate an applicant from India.  The Indian applicant may have a tendency to shake their head from shoulder to shoulder as an affirmative indication, whereas in North America the same gesture tends to indicate uncertainty, irritation, or indifference.  (Indian-Canadian stand-up comic Russell Peters has a bit about this.) In this way, the AI can encode discrimination into the evaluation process in ways that are inaccurate (harming the potential employer) and unfair (harming the applicant). 



In response to these criticisms, AI firms and their advocates point out that humans are probably more biased (consciously or unconsciously) than the AI tools, and that AI can actually serve as a control against the bias of an individual interviewer by providing consistent scoring across a large number of interviews. Furthermore, advocates for AI point out that the solution to the bias problem is simply to use the tools more, so that they accumulate enough data to correct their own biases.  (Opinion: Ultimately, I think this may indeed be a critical aspect of a long-term solution to implicit bias.)

Legislators in Illinois were not content to wait for AI tools to accumulate enough data to correct their own biases, however.  They enacted the Artificial Intelligence Video Interview Act to mitigate the risk of unfair treatment, making Illinois the first state to regulate the practice, effective 1/1/2020.  (It's not surprising that Illinois is the first state to regulate in this area.  Illinois was also the first state to adopt a specific biometric data law, the Biometric Information Privacy Act, in 2008.)

Five Simple Requirements:

The AIVIA is quite concise, with just five requirements (a minimal compliance-tracking sketch follows the list).  An employer must:
  1. notify applicants for Illinois-based positions before their video interviews are analyzed by AI;
  2. explain to applicants how the AI works and what general characteristics will be used to evaluate them;
  3. obtain applicants’ consent to the use of the technology; 
  4. limit the sharing of videos to those whose expertise or technology is necessary for the stated purposes; and 
  5. destroy videos within 30 days of a destruction request from the applicant (including all backup copies and copies held by third parties).
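
For employers wondering what these duties look like operationally, here is a minimal, hypothetical sketch of the kind of record an HR system might keep for each covered interview video, tracking the notice, explanation, consent, sharing, and 30-day destruction requirements.  The field names are my own; the statute prescribes no particular format.

```typescript
// Hypothetical record for tracking AIVIA compliance per interview video.
// Field names are illustrative; the statute prescribes no particular format.
interface AiviaInterviewRecord {
  applicantName: string;
  noticeGivenAt: Date | null;            // (1) notice before AI analysis
  aiExplanationProvided: boolean;        // (2) how the AI works / characteristics evaluated
  consentObtainedAt: Date | null;        // (3) applicant's consent
  sharedWith: string[];                  // (4) only those whose expertise or technology is necessary
  destructionRequestedAt: Date | null;   // (5) starts the 30-day destruction clock
}

// (5) Compute the deadline to destroy the video and all copies: 30 days
// after the applicant's destruction request.
function destructionDeadline(record: AiviaInterviewRecord): Date | null {
  if (!record.destructionRequestedAt) return null;
  const deadline = new Date(record.destructionRequestedAt);
  deadline.setDate(deadline.getDate() + 30);
  return deadline;
}

const example: AiviaInterviewRecord = {
  applicantName: "Jane Applicant",
  noticeGivenAt: new Date("2020-02-01"),
  aiExplanationProvided: true,
  consentObtainedAt: new Date("2020-02-01"),
  sharedWith: ["Video-analysis vendor"],
  destructionRequestedAt: new Date("2020-03-15"),
};
console.log("Destroy all copies by:", destructionDeadline(example)?.toDateString());
```
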
Criticisms of the AIVIA:

A downside to this brevity is the inherent ambiguity. There are a host of unanswered questions.  What must be included in the notice? How detailed must the description of AI be? Must consent be given in writing (or can it be on video, or even implied)? Must an applicant request destruction in writing? Is there an exception to the destruction requirement for other legal retention requirements or litigation holds? All of these questions and more await answers.

In addition, the AIVIA has other flaws, in my opinion.  Briefly, here are the three major shortcomings I see in the legislation:

  1. The AIVIA should regulate activity, not the input.  The AIVIA should be about the practice of using AI, not about the data stored in video format.  The data should be regulated by a broader data protection law with notice, consent, sharing limitations, and retention/destruction rights that apply to all personal information (or sensitive personal information).  
  2. The AIVIA is too narrow.  Analysis of video is just one of several ways in which AI is used to make important decisions about the rights, freedoms and opportunities of individual humans.  For example, AI analysis of audio recordings of interviews would not technically be covered by the AIVIA, and certainly AI analysis of resumes and cover letters is not regulated.  A comprehensive AI law is needed to avoid singling out a narrow use of AI for regulation while leaving other, similarly risky, uses of AI ungoverned.
  3. The AIVIA lacks important substantive protections.  It should include a substantive right to request human intervention or challenge an AI decision, as GDPR Article 22 provides in cases of "automated decision-making," and an audit requirement of some kind.  AI creators need not disclose their proprietary algorithms and the like, but a knowledgeable regulator should have the ability to review the processes for shortcomings that are likely to cause significant unfairness to individuals.    

Other Legal Concerns:


  • The (U.S., federal) Equal Employment Opportunity Commission probably expects employers to establish controls to mitigate the risk of AI bias.  The EEOC held a meeting way back in 2016 to address this issue.   Bloomberg Law reported in November that the EEOC was investigating at least two cases in which AI bias was alleged.  
  • There is also a bill to regulate AI before the U.S. Congress. On April 10, 2019, a group of House Democrats introduced the Algorithmic Accountability Act of 2019, which would regulate any “automated decision system” that impacts consumers. It seems unlikely to pass at this point.
  • As mentioned above, the use of AI to evaluate job candidates is likely to be considered "automated decision-making" and/or "profiling" under Article 22 of the GDPR, meaning an organization must (i) give European applicants information about the processing; (ii) provide simple ways for EEA applicants to request human intervention or challenge a decision; and (iii) regularly audit or check to ensure automated systems are working as intended.
  • Given the likelihood that other AI regulation is coming, organizations should think about adopting a common-sense, flexible, principles-based AI policy.  Doing so is probably the right move from a data ethics perspective, and may also help avoid future pain and effort as regulation in this area evolves.  


Sponsored Post: This blog post has been brought to you by the friendly automatons at Skynet.