Saturday, December 17, 2016

The FCC Creates Privacy, Data Protection, and Data Breach Rules for Internet Service Providers

Image of Federal Communications Commission Seal

The Federal Communications Commission is venturing into new areas of privacy regulation.  By a narrow vote, the FCC has approved new rules that govern how internet service providers ("ISPs") use consumers' information.


ISPs long ago realized that customer data is valuable, and are continuing to develop ways to monetize that information.  For example, last month, AT&T explained that a major factor in its decision to bid on Time Warner was the lure of new possibilities in targeted advertising.  Last year, Comcast bought targeted advertising firm Visible World for similar reasons.


Efforts by ISPs to monetize user data have triggered concerns among privacy watchdogs and the FCC.  On October 27, 2016, the FCC adopted new rules to control when and how this information can be used and shared.  "It's the consumers' information.  How it is used should be the consumers' choice," said FCC Chairman Tom Wheeler. 


According to the FCC, the rules “do not prohibit ISPs from using or sharing their customers’ information – they simply require ISPs to put their customers into the driver’s seat when it comes to those decisions.”  The new rules require specific notices to consumers about:

  • The types of information the ISP collects from them

  • How the ISP uses and shares the information

  • The types of entities with whom the ISP shares the information

The rules also require ISPs to give a degree of control to the consumer.  ISPs will be required to obtain consumer consent (an "opt-in") before sharing certain categories of "sensitive" information, including:

  • Health information

  • Financial information

  • Geo-location

  • Children’s information

  • Social Security numbers

  • Web browsing history

  • App usage history

  • Content of communications

For other categories of information (those not deemed “sensitive,” such as an email address or service level), ISPs must still offer users the opportunity to “opt-out” of the use and sharing of their information, with some exceptions.  Customer consent can be inferred for certain uses, such as providing services and for billing and collection activities.


ISPs are prohibited from refusing to serve a customer who declines to provide a requested consent.  Because it is more profitable for an ISP when customers permit data use and sharing, the rules permit an ISP to offer customers a discount or other financial incentive in exchange for a requested consent.


The FCC has made it clear that its rules “do not regulate the privacy practices of websites or apps, like Twitter or Facebook, over which the FTC has authority.”  Websites and apps currently collect much more data than ISPs, so the practical impact of the rules on consumer privacy is likely to be limited.


The new rules impose a requirement that ISPs implement reasonable data security practices, including robust customer authentication and data disposal practices.  The rules also include a data breach notification requirement, which preempts those in existence in 47 states, but only to the extent that the FCC rules are inconsistent with a state's requirements.   


The rules become effective with respect to different sections at different times, with all of the rules likely becoming enforceable within one year. 


This action by the FCC creates just one more piece in the mosaic of statutes, regulations, and treaties that together comprise privacy and data security law. 

Sunday, November 20, 2016

"Cyber Safeguards and Procedures" for Law Firms

I recently spoke about information security issues to a group of approximately 175 attorneys in the Triad, at a continuing legal education event sponsored by Lawyers Mutual.  The session was titled "Cyber Safeguards and Procedures" and focused on data security risks faced by law firms and how they can mitigate those risks.  I was joined by Lawyers Mutual claims attorney Troy Crawford.  If you would like a copy of the slides from this presentation, please email me.

Sunday, October 16, 2016

A Few Thoughts on Selecting a HIPAA Privacy and Security Officer

Perhaps your organization is becoming a HIPAA covered entity or a business associate for the first time, and you now understand that your organization will have to comply with HIPAA. One of your first, and most important, tasks will be to designate a Privacy Officer and Security Officer.  This post describes some considerations you should think through when making this decision.

One person or two?
The HIPAA Privacy Rule requires that a privacy officer be designated, and the HIPAA Security Rule requires that a security officer be designated.  It is legally permissible to have one person designated as both, or to split the roles.  You'll need to decide whether to combine or bifurcate these roles.  

First, you need to decide whether you have one person within your organization who has the capabilities required for both roles.  The Privacy Officer is responsible for understanding who is allowed to access protected health information (PHI), and will need to answer questions about practices, address requests for information, and handle training and monitoring of other staff. The Security Officer is primarily focused on protecting electronic protected health information (ePHI) from unauthorized access (e.g., meeting encryption requirements, etc.). If the person you would prefer to designate as the Privacy/Security Officer does not have an understanding of the technological aspects of protecting ePHI, there are two solutions: (a) designate someone with the technological understanding to be the Security Officer, or (b) instruct someone with the technological understanding (either inside or outside of the organization) to assist the Privacy/Security Officer.

What is most effective? The benefit of designating two officers is that each can be more specialized, and potentially more effective in their respective areas. However, the risk associated with having two officers is that things that are not clearly just privacy or just security might fall through the cracks if the two do not coordinate well.

What is most efficient? For administrative purposes, it's hard to argue that having one designated officer isn't substantially easier than having two. There is so much overlap in the two areas of responsibility that if you can have one person be responsible for both, it may avoid a lot of duplication of effort. Combining the roles is more common in smaller organizations.

All that said, there's no legally incorrect answer here. Just like the debate over whether a CEO should also be the Chairman of the Board, there are good arguments on either side, and the answer often boils down to the size of the organization and administrative ease.

Can (and should) an organization have more than one Privacy Officer or Security Officer?  Some organizations are both a HIPAA "covered entity" (e.g., healthcare provider or sponsor of an employee health plan) as well as a "business associate" (e.g., service provider to a covered entity). Those organizations will need to decide whether the Privacy and Security Officer(s) they designate for themselves as a covered entity should be the same person(s) designated for purposes of the protected health information they acquire as a business associate.  Generally speaking, an organization's obligations as a covered entity are similar to its obligations as a business associate. With the exception of contractual obligations in business associate agreements, the basic legal obligations are almost identical. (The Security Rule obligations to protect ePHI are basically identical. The Privacy Rule obligations are very, very similar.)  

Generally, I don't think there is a compelling reason to have separate Privacy Officers (or Security Officers) for these two capacities in which an organization might be acting, and I don't believe that is a common practice.  I think it is most efficient to have one Privacy Officer and Security Officer who is responsible in both contexts, and who understands the subtle differences in those contexts.  Organizations that find themselves acting as both a covered entity and a business associate should be aware of the distinctions, however, and should have policies and procedures that reflect those distinctions.  Here is one practical example:  Most employees should be shielded from access to PHI that is held by a plan sponsor of an employee benefit plan.  However, within the same organization, far more employees might have a legitimate need to access PHI held in the organization's capacity as a business associate of other organizations. 

Once you've made this important decision, you can begin building a HIPAA compliance policy and procedures around the basic structure you've chosen. (Let me know if you'd like some help with that.) - Matt

Sunday, October 9, 2016

Is Your Customer Data Your Greatest Asset or Your Greatest Liability (or Both)?

Customer data can be a treasure trove for an organization.  Many organizations believe customer and prospect data to be their most valuable asset.  Unfortunately, some have discovered that, unless handled with care, it can also be their greatest liability.

Organizations of all kinds collect, store, analyze, use, and share consumer data for myriad reasons.  Consumer data may help an organization maintain contact with a customer or prospective customer.  Properly analyzed, it can often predict customer behavior, allowing an organization to tailor its communications and offerings.  It can reveal patterns that help increase revenue, minimize expenses, and ultimately drive profitability.  Data can be leveraged and monetized by sharing with affiliated and non-affiliated entities.  Given the immense value of consumer data, it is no surprise that some of the most valuable companies in North Carolina and the world are data analytics firms.

Over the past few years, however, it has become widely acknowledged that such valuable data can also be a liability of the greatest magnitude.  The costs of the largest data security breaches have made headlines.  But these sensational headlines sometimes create the misleading impression that only large organizations incur massive costs, and that the losses are solely attributable to hackers.

The Risks, by the Numbers
One of the best sources of information about risks associated with consumer data is NetDiligence's annual study of "cyber insurance" policy claims.  Although the information is limited to incidents for which the targets had insurance coverage, and is limited to covered losses, it is still an excellent source of data.  The most recent study, covering claims data from 2012 to 2015, showed the average insurance claim amount was $673,767, with average legal fees of $434,354.

Smaller Organizations Face Increasing Risks
In the NetDiligence study, organizations were categorized by size (revenue), which provides some interesting insights.  The smallest organizations represented the largest raw number of incidents, probably because there are simply more small organizations than large ones.  While the three smallest categories of organizations accounted for a combined 71% of the reported incidents in 2015, they were responsible for only 38% of records exposed.  Surprisingly, however, according to NetDiligence, some of the largest claims came from smaller organizations.  This may be a result of smaller organizations being less aware of their exposure or having fewer resources to devote to data protection and security awareness training for employees.  By contrast, mid- and large-revenue organizations accounted for only 17% of incidents, but were responsible for 60% of the consumer records exposed.  This seems intuitive, because larger organizations would be expected to have more consumer records, on average, than smaller organizations.

Risks Are Spread Across Industries
The NetDiligence study also reveals a good deal about the source of recent risks.  While risks in prior years were concentrated in certain industries, they are becoming less concentrated year by year.  According to the study, recent losses were more evenly dispersed among business sectors, with healthcare reporting the most at 21% and financial services coming in second at 17%.  In other words, the categories of affected data resulting in the highest losses, from all industries, were health information and financial data, but the majority of losses were incurred outside of these two historically most targeted industries.

Vendors: The Weak Link?
Vendors are a common source of privacy and data security risk.  Vendors include service providers and others with access to an organization's data or systems.  In 2015, 25% of claims were attributable to vendors.  Of those claims, approximately half were hacking incidents, with the other half largely accidental or intentional disclosures.  Another interesting observation is that the vendor events exposed significantly more consumer records than events that occurred at the organization itself, indicating that failures by vendors may tend to be more systemic than failures at the level of the primary organization.

Healthcare providers and other HIPAA-covered entities, financial institutions, and defense contractors have long been required to obtain contractual commitments from their vendors requiring security protections.  Following the breach of a Target vendor that resulted in a massive theft of Target's customer data, organizations of all kinds began imposing contractual privacy, security and, importantly, indemnity terms on vendors, and these terms are sometimes heavily negotiated.

Data Use Violations: A Bigger Risk Than Breach?
Data-related liability in the context of nefarious hackers breaching security systems from foreign lands dominates the headlines, but much less dramatic circumstances lead to large numbers of significant incidents every year.  An analysis of what triggered the losses that gave rise to cyber liability claims in 2015 reveals that targeted security breaches are not the only source of loss.
There were many reported causes of claims, and while the most expensive were malicious hacking attacks, the second greatest cause was the wrongful collection of data—in other words, data use (or "privacy") claims.  Data use violations involve the intentional collection, storage, use, or sharing of consumer information in a way that violates the law, a contract, or an individual's rights. 

Organizations and individuals throughout the United States are collecting, using, and sharing data in ways that expose them to liability, often without realizing it.  One of the most frequent violations involves collecting consumer information without consent, followed closely by using consumer information for purposes that were not consented to at the time of collection.

An Ounce of Prevention
Perhaps nowhere else is the axiom "an ounce of prevention is worth a pound of cure" more appropriate than in the context of the modern explosion in the collection and use of customer data.  Preventing a data security- or privacy-related loss involves more than just purchasing defensive technology.  According to reports, simply adopting and implementing good policies and procedures for correctly collecting, storing, using, and sharing data would have prevented a large portion of the reported losses.  Data governance policies and procedures should be carefully crafted and followed, and should cover the following areas:
  • Document retention and data destruction
  • Consumer consent practices and electronic signatures
  • Payment card information
  • Employee email and telephone monitoring
  • Website and application monitoring and advertising
  • Email marketing
  • Telephone and text message marketing
  • Fax marketing
  • International consumers and international data transfers
  • Password administration and limited access
  • Background checks and credit reports
  • Identity theft and "red flags"
  • Employee and consumer health information
  • Educational records
  • Sharing customer information with affiliates
  • Sharing customer information with non-affiliates
The policies should address the following:
  • Designated categories of data based on sensitivity (low risk, high risk, etc.) and business necessity (critical, valuable, low-value, etc.); and,
  • Established guidelines for collecting, using, storing, and sharing various categories of data.

Telling the World
Organizations frequently publish privacy policy statements to inform their customers and others about their privacy practices.  Financial institutions, healthcare providers, and website operators are all required by law to make such statements publicly available.  Many organizations, unfortunately, misunderstand the purpose of this document.  A privacy policy statement is not the same as an internal policy or procedure; it is a public-facing disclosure that should be simple and flexible.
Organizations are often their own worst enemies in misconstruing the purpose of privacy statements.  They frequently draft and distribute privacy policy statements that include lofty language and make promises the organizations are not required to make, only to later fail to fulfill those unnecessary promises, thereby creating unnecessary liability.  Practices that do not live up to the statements made in a privacy policy statement are the number one source of Federal Trade Commission enforcement actions.  

Not If, But When
It is natural for an organization, just like an individual, to hope that it is immune from risks that others face.  If, however, the federal government, the United States military, and major multinational corporations are susceptible to major privacy and data security incidents, your organization probably is as well.  Therefore, it is most reasonable to think of a data security or privacy incident not in terms of "if," but rather "when."

Breaches and intentional, but unauthorized, data disclosure events trigger reporting obligations to federal and state officials, customers, and sometimes the media, and often result in regulatory enforcement actions and litigation (including class action lawsuits).  There are, however, steps that an organization can take to prepare for such unwelcome events and that can help mitigate resulting losses.  Two of the most important steps an organization can take are:
  • Purchase cyber insurance; and,
  • Adopt a breach response plan.
Cyber insurance is a term that refers to a category of insurance policies that transfer, in return for the payment of a premium, some of the financial risk of a data security incident to an insurance company.  Cyber insurance policies are not standardized, and they vary dramatically in the scope of coverage.  For example, the direct loss of funds from a hacked bank account is almost never covered by a cyber insurance policy, but many potential liabilities and defense costs can be covered.  It can be helpful to have the assistance of a knowledgeable attorney when evaluating cyber insurance coverage options.

Having an incident response plan in place is always a good idea.  Once an incident has occurred, the required timeframes for reporting the incident and mitigating any resulting harm can be very short (sometimes less than a week).  Having a plan in place, and a designated team ready to implement the plan, can make a tremendous improvement in your organization's response and potentially limit losses associated with the incident.  Additionally, incident response assistance (such as forensic computer expertise, call centers, printing and mailing services, and public relations) can be vetted and prices negotiated in advance, with potentially massive savings.

Ready or Not, It's Time
Complying with privacy laws, mitigating risks, and preparing for the possibility of a loss may seem daunting.  Given the scope and magnitude of the risks, however, it is simply a necessity in today's environment.  The task is manageable with some professional guidance, and the peace of mind that preparation can bring is well worth the effort.

Matt Cordell is the leader of the Privacy and Information Security practice group at Ward and Smith, P.A., a full-service law firm with five offices and approximately 100 attorneys across North Carolina.  He is a Certified Information Privacy Professional (CIPP/US) and a member of the International Association of Privacy Professionals.  Matt is also the chair of the NC State Bar privacy and information security specialization exploratory committee. 


Tuesday, July 5, 2016

Business Associates of HIPAA Covered Entities Beware!

If your organization is a business associate of a HIPAA covered entity (such as a health care provider or employee health benefit plan), you should know that the Department of Health and Human Services' Office of Civil Rights (OCR) is actively pursuing business associates over privacy and information security violations.

Business Associate Fined >$1,500 Per Patient

This past week, Catholic Health Care Services of the Archdiocese of Philadelphia (CHCS) agreed to settle with OCR over alleged violations of the HIPAA Security Rule that came to light after the loss of an iPhone containing protected health information (PHI) of 412 nursing home residents. The settlement requires a monetary payment of $650,000 and a corrective action plan. (For those who have not already done the math, the fine alone will cost CHCS more than $1,500 per patient!)

In announcing the settlement, OCR's Director Jocelyn Samuels emphasized the importance of a comprehensive program: “Business associates must implement the protections of the HIPAA Security Rule for the electronic protected health information they create, receive, maintain, or transmit from covered entities. This includes an enterprise-wide risk analysis and corresponding risk management plan, which are the cornerstones of the HIPAA Security Rule.”  In the case of CHCS, the iPhone was unencrypted and was not password protected. To make matters much worse, OCR learned that CHCS had no policies addressing the loss of mobile devices containing PHI, no security incident response plan, no risk analysis, and no risk management plan.

As part of the settlement, OCR will monitor CHCS for two years to ensure compliance. You can read the Resolution Agreement and Corrective Action Plan on the OCR website.

Business Associate Audits
This announcement comes just months after the launch of the second phase of OCR's much-anticipated audit program for business associates. Rather than awaiting reports of violations, the OCR is actively auditing business associates. When announcing the audit program, OCR explained the process:
  • First, OCR will contact organizations by email to verify contact information and complete a pre-audit questionnaire.
  • Organizations selected will be subject to either a desk audit, an onsite audit, or both.
  • Organizations will have about 10 business days to produce requested documents, so there will be insufficient time to create or update HIPAA privacy and security policies, security risk assessments, breach notification documentation, business associate agreements, and other HIPAA documentation after notification.
Business associates should not wait until an audit is initiated.  Now is the time to ensure that HIPAA programs are in place, complete, and up to date.  If this week's CHCS settlement is any indicator, the OCR will be seeking large fines when it uncovers violations.

Matt Cordell is a North Carolina lawyer with expertise in HIPAA and health care privacy and information security. 

Tuesday, June 28, 2016

BREXIT: Unchartered Territory for EU and UK Data Protection Standards

My law partner, Deana Labriola, has written a piece about the Brexit and its impact on the GDPR. 

BREXIT: Unchartered Territory for EU and UK Data Protection Standards

| Deana A. Labriola
So what changed on June 23, 2016? Maybe everything, and then again, maybe nothing at all.  The UK is leaving the EU.  While this decision will have far reaching implications for years to follow, it may be far less impactful for data protection laws, at least in the short term.

You can read the rest here.

Tuesday, June 14, 2016

Don't Be Tardy. Get Schooled on North Carolina's New Education Technology Law Now!


New NC Law Enhances Student Privacy Rights and Restricts Providers of Online Educational Resources

Education technology (or "EdTech") organizations will want to pay close attention to a new North Carolina statute that was signed into law a couple of days ago.  On Thursday, June 9, 2016, a new law titled "An Act to Protect Student Online Privacy" was enacted to further protect the privacy of K-12 students in North Carolina.  It becomes effective October 1st, so education technology companies have very little time to prepare before the upcoming school year begins.  They should review their data collection, storage, use and sharing policies and procedures in light of the new law, and adjust their practices if necessary.  In some cases, this may require changing or disabling the features and functions of websites or applications.

Who Is Affected?

The law is primarily aimed at the fast-growing EdTech sector.  Organizations may be affected whether or not they have a contract with a school, school board, or the State of North Carolina.  The statute applies to operators of websites, online services, online applications, or mobile applications that know the site, service, or application is used primarily for K-12 school purposes.  School boards are also affected, because they should ensure that their contracts with providers of online services require those providers to comply with the new law.
Like the existing student privacy statute, the law applies to public schools only.  Private schools, and their service providers, will remain unaffected.  (If private schools wish to protect the privacy of their students, they must do so by including contractual protections with their service providers.  I would strongly suggest that they do so.)

New Prohibitions

Online operators are prohibited from selling or renting a student's information without parental consent.  They are also generally prohibited from disclosing a student's covered information (defined below) except for six specific purposes.  The permissible disclosures include disclosures to a subcontractor who is contractually prohibited from further disclosure of the information and who agrees to implement reasonable security procedures.

Online operators may not engage in so-called "targeted advertising" (better known as "behavioral advertising") based on information received for "school purposes."  "Targeted advertising" means presenting an advertisement to a student where the advertisement is selected based on information obtained (or inferred over time) from that student's online behavior, usage of applications, or covered information.  Furthermore, they are prohibited from "amassing a profile" of a student except for school purposes.

New Requirements

In addition to these new prohibitions, the statute imposes two new obligations on online operators.  All operators must "implement and maintain reasonable security procedures" and "protect covered information from unauthorized access, destruction, use, modification, or disclosure."  Operators are also required to delete a student's information at the request of the school board, or when the operator stops providing service to the school board, unless the student's parent consents to the record retention.

Broader Scope of Covered Information

Although the student privacy statute already contained a definition of the term "personally identifiable information," the new statute creates a significantly broader definition of the same term that is applicable only for purposes of the online privacy protections.  It includes twenty-nine (29) categories of information.

Interaction with Existing Law

You may recall that I wrote in mid-2014 about a then-new student privacy law in North Carolina.  You can read that summary here.  Titled "An Act to Ensure the Privacy and Security of Student Educational Records," the law prohibited schools from collecting certain categories of information, restricted the disclosure of personally identifiable student data, required school boards to give parents an annual summary of parental rights and opt-out opportunities, and directed the State Board of Education to make rules regarding privacy standards, audits, breach notification and data retention and destruction policies.  The 2016 law described in this article amends and enhances the 2014 statute.

It should be noted that the federal Children's Online Privacy Protection Act (better known as COPPA) already protects children's online privacy in the educational context as well as in all other contexts.  Any organization affected by North Carolina's new statute should already be in compliance with COPPA, but if it is not, there is no better time than now to become compliant.

Don't Get Sent to the Principal's Office!

Education technology companies and school boards have very little time to revise their policies and practices in order to comply with the new statute.  They should consult with their privacy counsel quickly so that they will not be "sent to the principal's office" when the summer break ends!

You can find more posts like this by Ward and Smith, P.A. attorney and Certified Information Privacy Professional (CIPP/US) Matt Cordell at the North Carolina Privacy and Information Security Law Blog.  Matt Cordell practices in the areas of privacy law, information security law, data use law, and related consumer protection laws.  Ward and Smith, P.A. has offices in Raleigh, New Bern, Greenville, Wilmington, and Asheville.  This article is not intended to give, and should not be relied upon for, legal advice in any particular circumstance or fact situation. No action should be taken in reliance upon the information contained in this article without obtaining the advice of an attorney.

Monday, May 30, 2016

European Data Protection Supervisor Rejects Proposed U.S. Privacy Shield

Today, the European Data Protection Supervisor (EDPS) delivered a crushing blow to the proposed Privacy Shield, sending U.S. and European negotiators back to the drawing board.

Readers of this blog know about the collapse of the EU/US data privacy Safe Harbor framework (which had been in place since 2000) and the efforts to negotiate a trans-Atlantic resolution (see my prior posts here, here and here).  The EU/US Safe Harbor was struck down by the EU Court of Justice last year, and officials have been scrambling to replace it.  This spring, the U.S. Department of Commerce released a proposal (the "Privacy Shield") designed to satisfy European officials that U.S. organizations could be trusted with information about Europeans.  I have already described that proposal in relative detail, here.

The European Data Protection Supervisor (EDPS), appointed in 2014, is an independent institution of the EU, responsible under European law "for ensuring that the fundamental rights and freedoms of natural persons, and in particular their right to privacy, are respected." Under Article 28(2) of Regulation 45/2001, the European Commission is required, "when adopting a legislative Proposal relating to the protection of individuals' rights and freedoms with regard to the processing of personal data", to consult the EDPS. Since the submission of the proposed Privacy Shield to the EDPS, officials on both sides of the Atlantic have been holding their breath in anticipation of this Opinion.

Earlier today, EDPS Giovanni Buttarelli declared that the Privacy Shield was "not robust enough." Although "a step in the right direction," it was deemed inadequate. Specific criticisms involve safeguards, judicial redress, and routine access by U.S. government authorities. In Opinion 4/2016, titled "Opinion on the EU-U.S. Privacy Shield draft adequacy decision," the EDPS outlined three main recommendations (integrating data protection principles; limiting exceptions, which are referred to in EU law as "derogations"; and improving redress and oversight mechanisms) as well as five secondary recommendations. You can read the full text of the EDPS Opinion for yourself here.

The sense of urgency is real. The General Data Protection Regulation (technically regulation EU 2016/679, but known simply as the "GDPR") becomes effective in May 2018, and the Privacy Shield was intended to take effect before the GDPR in order to satisfy its requirements in addition to the existing EU legal framework.

Stay tuned, as there is certainly much more to come.

Sunday, March 6, 2016

Has The U.S. Found A "Privacy Shield" That The E.U. Can Live With?

Regular readers know I've been writing recently (here and here) about the collapse of the EU/US data privacy Safe Harbor framework and the efforts to negotiate a trans-Atlantic resolution. This is a major issue for U.S. organizations that do business in Europe or with Europeans. 

On Monday (February 29), the U.S. Department of Commerce released a proposal (the "Privacy Shield") designed to "provide[] a set of robust and enforceable protections for the personal data of EU individuals." The Privacy Shield release is *just* 132 pages, which you can read here.

To rely upon the Privacy Shield framework, a U.S. based organization would be required to self-certify to the Department of Commerce and publicly commit to comply with the Privacy Shield's requirements. While joining the Privacy Shield framework will be voluntary, once an organization undertakes to comply with the Framework’s requirements, the commitment will become enforceable under U.S. law. Key elements are outlined in a "fact sheet" here, including the following:
  • The Privacy Shield contains seven distinct categories of "principles," including notice, choice, accountability for onward transfer, purpose limitation, and recourse, enforcement, and liability. (These should sound familiar to those who previously complied with the Data Protection Directive.)
  • U.S. entities will continue to self-certify.
  • U.S. entities will adopt a privacy policy statement which will become legally enforceable.
  • When a U.S. entity's privacy policy is available online, it must include a link to the Department of Commerce’s Privacy Shield website and a link to the website or complaint submission form of the independent recourse mechanism available to investigate individual complaints.
  • A U.S. entity must inform individuals of their rights to access their personal data, the requirement to disclose personal information in response to lawful requests by public authorities, which enforcement authority has jurisdiction over the organization’s compliance, and the organization’s liability in cases of onward transfer of data to third parties.
  • Privacy Shield participants must limit the personal information they collect to information relevant for the purposes of processing; additional personal information may not be collected or retained.
  • To transfer personal information to a third party acting as a data controller, a Privacy Shield participant must:
    • Comply with the Notice and Choice Principles.
    • Enter into a contract with the third-party controller that provides that such data may only be processed for limited and specified purposes consistent with the consent provided by the individual and that the recipient will provide the same level of protection as the Principles.
  • To transfer personal data to a third party acting as an agent, a Privacy Shield participant must:
    • Transfer such data only for limited and specified purposes;
    • Ascertain that the agent is obligated to provide at least the same level of privacy protection as is required by the Principles;
    • Take reasonable and appropriate steps to ensure that the agent effectively processes the personal information transferred in a manner consistent with the organization’s obligations under the Principles;
    • Upon notice, take reasonable and appropriate steps to stop and remediate unauthorized processing; and
    • Provide a summary or a representative copy of the relevant privacy provisions of its contract with that agent to the Department upon request.
  • Privacy Shield participants must respond promptly to inquiries and requests by the Department of Commerce for information relating to the Privacy Shield Framework.
  • Privacy Shield participants must make public any relevant Privacy Shield-related sections of any compliance or assessment report submitted to the FTC if the organization becomes subject to an FTC or court order based on non-compliance.
  • If an organization leaves the Privacy Shield Framework, it must annually certify its commitment to apply the Principles to information received under the Privacy Shield Framework if it chooses to keep such data or provide “adequate” protection for the information by another authorized means.
There's still a big question mark: genuine uncertainty exists as to whether the proposal will be approved (i.e., deemed "adequate") in Brussels. If the EU determines that the Privacy Shield framework is adequate, the U.S. Department of Commerce will begin accepting certifications from U.S. organizations promptly.