- There Is Only One Opportunity for Initial Design of the First Data Protection Agency in the United States
This blog reflects the views of Marty Abrams.

The style, substance and leadership of data protection agencies make a difference. Every agency understands its charge to protect individual rights. In Europe, it is to protect autonomy, seclusion and fair processing. In California, it is similar but expressed in terms of informational privacy interests and autonomy privacy interests. [1] An increasingly critical task for these agencies is parsing privacy’s interplay with other personal rights. For example, the nature and substance of the Singapore PDPC and the French CNIL, both of which are recognized as excellent agencies, define how they confront the balancing of rights and interests. Since this parsing explains the rules for innovation in an age of observation-driven analytics, it should be front of mind for the architects of any new agency.

The fifth largest economy in the world, California, will soon be creating a brand-new data protection agency. This U.S. state is larger than all but four EU member states. California has more tech headquarters than any political entity other than China. Voters enacted the California Privacy Rights Act of 2020 (CPRA), and in doing so they also voted in favor of a dedicated California Privacy Protection Agency. This agency will have broad powers to educate the public, formulate regulations, enforce the law, advise the legislature, and participate in international venues. The agency will have $5 million to work with in its first year and $10 million in subsequent years. That amount is not large (the Irish Commissioner’s budget is $21.5 million); however, there is a possibility that the agency will be able to obtain the proceeds from enforcement actions. Everyone should care about how this agency is structured and led, because its actions will impact both the rights of consumers and the responsibilities of businesses, and because it has the dual goals of strengthening consumer privacy and giving attention to the impact on business and innovation.

Individual rights to privacy are very important in a digital age but are not absolute. Section 3 of the CPRA and Recital 4 of the GDPR make this clear. These rights must be considered against individuals’ other substantive interests, such as health, education, employment, and economic wellbeing. Furthermore, it is increasingly understood that privacy needs to be considered in light of the interests of other individuals. The global pandemic makes this point even clearer when one’s individual interest in autonomy may conflict with another’s interest in avoiding COVID-19. The charge to protect privacy is crystal clear in the CPRA. However, the responsibility to balance the full range of issues is implied by the concept of proportionality. This balancing of interests creates the hardest tests for regulatory agencies. Such challenges do not mean ignoring individuals’ collective rights to autonomy and transparency. They mean employing reasoned approaches that accommodate data uses for knowledge creation where risks, for example to fair processing, are minimal.

Richard Thomas, when he was the UK Information Commissioner, stressed that agencies must be selective to be effective. This approach means agencies must prioritize individual rights based on how many persons are impacted and on the severity of the harm. Some of the risks relate to procedural rights that, if not enforced, make regulations ineffective.
For example, ineffective transparency makes it very difficult for many persons to exercise rights to object to processing, gain access to their data, and understand purposes for which data will be used. However, there are concerns about procedural rights that are on the margin. There are times when focusing on process risk distracts from the risks of insufficient data to protect people from disparate treatment and real harms to health, financial status, education and even freedom. Selective to be effective means a focus on these impacts.

So, with all the activities that are open to a privacy agency, which activities should it select? Based on those activities, for which talents should it recruit? The Information Accountability Foundation (IAF) believes the time to consider those questions is when an agency is being formed, not after the agency already has allocated all of its resources. The IAF, as a research and education organization, has worked with agencies worldwide. The IAF team has seen agencies evolve from database registrars to powerful enforcement agencies. It would be nice to see an agency structured to deal with the challenges of the future and not just the present or the past.

There is no better place to start than a discussion of the new California agency. This discussion will begin but not end with the IAF policy video call on 17 December. If you would like to attend that call, please let Stephanie know at spate@informationaccountability.com. That call will feature very diverse views. It is the IAF’s intent to continue the discussion in 2021 while the agency is in its formative stage. The IAF will hold a multi-stakeholder dialog in May 2021. Please let us know what you think by contacting Marty at mabrams@informationaccountability.org.

[1] De la Torre, Lydia F., “California’s Constitutional right to privacy,” medium.com, October 15, 2020
- Multilateral Proportionality Requires Definition
The stresses of 2020 have challenged the data protection community’s understanding of data protection and how to apply its practices. COVID-19 has called for data-driven solutions to keep humanity safe by finding new treatments for this very contagious disease. This threat does not lessen the fact that health data is sensitive and, if misused, could cause real harm. In trying to find a path toward safety for individuals and society, data protection authorities have relied upon the principle of proportionality.

Proportionality, as defined for criminal and administrative law purposes, connotes a bilateral balancing that most often is between the government’s interests and powers and the individual’s fundamental rights. [1] When data protection and privacy authorities discuss proportionality as it relates to the private sector, they typically focus on data minimization, i.e., the company should use a proportionate volume of data to limit the risk to a particular individual. This focus is bilateral; it relates to two sides. It works for simple exchanges, such as completing a transaction. Often, however, the processing in question is not bilateral. Instead, it is multilateral, e.g., a complex process to generate an insight that impacts many individuals, both positively and negatively, beyond the controller and the data subject. Therefore, a bilateral application of proportionality is not useful when the purpose of the processing is insights that impact many individuals and organizations. It is multilateral; it relates to three or more parties and not just the company processing the data for the individual to whom the data pertain. Furthermore, bilateral proportionality, in administrative law, limits the absolute power of the government to restrict life and liberty. When companies are balancing issues, they do not have that absolute power of the government but rather must be cognizant of many stakeholder impacts, i.e., multilateral proportionality.

COVID-19 illustrates this bilateral/multilateral proportionality dichotomy. COVID-19 is taking place in an era where data, fairly processed, is seen as a salvation and where data processed efficiently may be seen as an agent of evil. The data may be the same in both use cases, and the technology used also may be the same. The difference is due to the choices made by those who process the data. In many ways, the dilemma is well illustrated in Recital 4 of the GDPR:

“The processing of personal data should be designed to serve mankind. The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality. This Regulation respects all fundamental rights and observes the freedoms and principles recognized in the Charter as enshrined in the treaties, in particular the respect for private and family life, home and communications, the protection of personal data, freedom of thought, conscience and religion, freedom of expression and information, freedom to conduct a business, the right to effective remedy and to a fair trial, and cultural, religious and linguistic diversity.” (emphasis added)

Recital 4 suggests that the purpose of data protection ultimately is to make sure data serves mankind. The definition of mankind is “human beings collectively.” Mankind includes individuals of all types, with a mixture of traditions, skills, frailties, and languages.
The concepts – “designed to serve mankind” and “diversity” – are emphasized in this blog for a reason. They represent the plural nature of the data world and demonstrate that the singular rights of individuals in isolation are not always the objective. Data protection and privacy law, by its very nature, establishes the process to protect specific fundamental rights. The nature of data protection and privacy law requires a variable approach. However, the wording in Recital 4 is contradictory. Some of the words in Recital 4 downplay the pluralism that other words suggest. The balancer word is proportionality. Balancing of interests creates winners and losers. Proportionality has been defined in both mathematical and legal policy terms. In mathematics, it means two elements remaining in ratio to each other with a constant. It speaks to a bilateral balance between those two elements. In criminal law, it speaks to the punishment fitting the crime. In administrative law, it speaks to the government’s requirements not overreaching in relationship to an individual. All are bilateral processes. Bilateralism in policy suggests a fulcrum. How does one party’s gain affect the other party’s status? Accountability, the basic building block of modern data protection, has been caught up in bilateralism too. Accountability requires organizations to process data in a responsible and answerable way. Both are an obligation for the organization, not one or the other. The first half of that dual obligation, being responsible, speaks to doing what is right. The second, answerability, speaks to the organization being able to demonstrate that what it says is right is credible. Answerable is dependent on responsible. For some, responsible means processed for fair purposes in a fair manner. For others, answerable means documenting that data were processed according to a set of procedural and mostly regulatory rules. Compliance with procedural rules is fairly easy to prove. Demonstrating something is fair is more difficult to do and much more difficult to judge. The weakness in bilateralism has been accelerating since analytic processes have been transformed by big data over the last fifteen years. Advanced analytics gave mankind the ability to reach new insights by looking at data correlations absent preconceived notions. These insights can be both beneficial and detrimental. The question is beneficial and detrimental to whom. For example, insights have resulted in new approaches to tackling cancer and the ability to isolate the vulnerable who might be exploited. New treatments for cancer most likely are deemed fair; exploiting the vulnerable most likely is not fair. If fair is the test, then fair to whom and fair for what must be answered. What is fair for numerous parties, might not be fair to others. For example, is it fair for cancer patients to preclude data pertaining to their biology being used with the data from others to improve cancer treatments in the future? Artificial intelligence, the next stage of advanced analytics, is forcing many fields to search for means to provide context to fair. One of the best definitions of fairness comes from the High-Level Expert Group on AI, European Commission, as part of their recommendations on “Ethics Guidelines for Trustworthy AI.” It defines a Principle of Fairness as: “The development, deployment and use of AI systems must be fair. 
While we acknowledge that there are many different interpretations of fairness, we believe that fairness has both a substantive and a procedural dimension. The substantive dimension implies a commitment to ensuring equal and just distribution of both benefits and costs and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation. If unfair biases can be avoided, AI systems could even increase societal fairness. Equal opportunity in terms of access to education, goods, services and technology should also be fostered. Moreover, the use of AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. Additionally, fairness implies that AI practitioners should respect the principle of proportionality between means and ends and consider carefully how to balance competing interests and objectives. The procedural dimension of fairness entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them. In order to do so, the entity accountable for the decision must be identifiable, and the decision-making processes should be explicable.”

The fairness definition from the High-Level Expert Group is helpful. It suggests that proportionality includes both means and ends and the balancing of interests for numerous parties in society. Society is plural because it means an aggregate of individuals living together. Therefore, according to the Principle of Fairness, proportionality means the fundamental rights (plural) for the many individuals (plural) that comprise a society (plural).

The COVID-19 discussion earlier demonstrates this definition. Thousands of victims put tens of thousands of other individuals at risk of an extremely contagious disease. The risk from this disease goes beyond health to issues of economic survival and national security. Data that pertains to mankind determines what is fair. Data protection authorities, reaching for bilateral balance, have required data holders to think in terms of proportionality. But as described in criminal and administrative law, including the weighing of individual rights against government powers, proportionality is bilateral in nature. Yet, the COVID-19 analysis is not bilateral. The interests of multiple stakeholders are at issue. Proportionality as it relates to COVID-19 is multilateral. Often multilateral, rather than bilateral, proportionality should be applied in the private sector context.

This blog suggests an additional definition of proportionality is necessary. A definition is needed that goes beyond a bilateral fulcrum balancing two factors to a multilateral gyroscope balancing many factors. A gyroscope balances left and right, up and down, and side to side. This conclusion allows movement beyond the current limits of the proportionality principle in order to consider the numerous interests (plural) of the numerous parties (plural) that make up society (plural). So, in a data-driven world of exploding insights, both good and bad, the guiding principle should be a definition of proportionality that requires assessments that are multi-factor and multi-stakeholder and that demonstrate how a fairness determination was reached.
Such a guiding principle also might, as the High-Level Expert Group noted, suggest part of the solution is that “the entity accountable for the decision must be identifiable, and the decision-making processes should be explicable.” This guidance gets organizations closer to demonstrable fairness. Future IAF blogs and work will explore how multilateral proportionality informs demonstrable accountability and IAF model legislation.

[1] Van Kolfschooten and de Ruijter’s law review article “Covid-19 and Privacy in the European Union” discusses proportionality as it relates to government power.
- How to Use AI Ethically and Responsibly? An Ohio State Webinar Series Will Provide Some Answers
Written by Dennis Hirsch, Faculty Director, Ohio State Program on Data and Governance

To succeed in a data-driven economy, companies must innovate with data while maintaining the trust of their customers. This poses a dilemma since data analytics and AI can invade privacy or produce unfairness, and so undermine trust. Business data ethics, a new field of corporate management, pursues responsible data practices that will enable businesses to be both innovative and ethical. Many companies recognize that business data ethics is important, particularly in the age of advanced analytics and AI. But they lack reliable and detailed information about how best to achieve it.

Starting October 2, the Program on Data and Governance at The Ohio State University will be holding a five-part webinar series on Business Data Ethics that will provide some answers. To learn more and to register, visit the series webpage. The webinar series will feature an outstanding slate of panelists representing diverse industries (pharma/tech/consulting), federal agencies, think tanks, and research units at The Ohio State University, including the IAF’s own Peter Cullen. At the October 2 kick-off event, US Senator Chris Coons will deliver a keynote address, and an Ohio State research team that has spent the past two years studying corporate data ethics management will share its findings for the first time. Subsequent panels will examine the ethics, management processes, and technologies that companies use to identify and address ethical risks associated with their use of AI, and will provide a look at the future of regulation in this area. Businesses interested in data ethics management best practices, and anyone seeking to build a more ethical future for the data economy, are encouraged to attend.
- Risk Based Should Be More Than A Cliché
The IAF will hold a policy call October 14 on “What is Risk?” There are different types of privacy risks, and policymakers/regulators and organizations need to understand them. Policymakers/regulators need to understand them in order to draft and enforce this generation of privacy law, and organizations using data need to understand these types of risk in order to maximize their use of data. Since the EU General Data Protection Regulation (GDPR) and most recent privacy laws are risk-based, what is meant by risk? To begin with, it is risk to people, not risk to the organizations that process the data. Organizations always have been mindful of their own risks. The risk to people from the processing of data comes in two forms: The first form is procedural risk, the risk that individuals’ rights related to their data will not be respected. Those rights include the right to know, have access to data, correct data, and object to its use. It also includes the right to complain and have redress. The second form is impact risk, based on the outcomes of processing. Impact risk includes discrimination based on prohibited criteria and inappropriate loss of reputation. It also includes reticence risk, the costs to people of not processing data, of knowledge not gained, and of the impact of a knowledge vacuum. It may also include the failure to provide a benefit such as access to credit, health or employment.

Fifty years ago, policymakers thought the best way to avoid impact risk was through individual control, a matter of procedural rights. Many procedural rights were based on reliance on individual control backed by independent regulators and the ability for redress. As processing became more complex, more of the risk management process went from an individual burden to an organizational accountability obligation. Requirements for privacy by design and default, data protection impact assessments and responsible data protection officers were part of that migration of risk management from the individual to the organization. It is not an accident that the framing of the GDPR as risk-based coincided with accountability becoming an explicit requirement of the law.

So, flash forward to 2020. What the Schrems II case has shown us is that thousands of Euros, Dollars, Yen and other currencies are being spent by thousands of businesses to create “supplemental measures” and “additional safeguards” with limited evidence that those measures will actually reduce the impact of data transfers on individuals. Instead, these actions will reduce the likelihood that an organization will be fined by a data protection authority if its data transfers are challenged. This result shifts the calculus from being about risks to people to mitigation of risk to the organizations themselves. Not only does this shift not reduce impact risk, it does not reduce the individual’s procedural risk either. As is only too well known, this result is due to the decision of the European Court of Justice that found Privacy Shield is not equivalent and Standard Contractual Clauses might be insufficient because:

- The U.S. government’s bulk collection of data is disproportionate and a violation of the GDPR, and
- The inability of individuals to complain to an independent tribunal means they cannot exercise their rights under Article 47 of the Charter of Fundamental Rights of the European Union to an effective remedy and to a fair trial.
Just to be clear, this blog is not intended as a critique of Schrems II but rather a reassessment of what risk-based means. When the facts in Schrems II are looked at from a risk perspective, it becomes clear that the likelihood that most business transfers would ever be interesting to national security agencies is very slim and easily established. Most of those 5000 businesses that were part of Privacy Shield do not process data that are interesting to national security agencies. Since the likelihood is slim, the need for a complaint process with an independent tribunal is also slim. Is there a possibility of data being requested by a national security agency? Absolutely. But risk requires an analysis of likelihood and impact and a balancing of other risks. Not being able to transfer data outside the EU raises massive issues, for many different stakeholders, and that inability has an impact on people. This result creates a disproportionate risk for other fundamental rights and interests. [1] Increasingly, the possibility that an individual right will not be exercisable has been raised by data protection agencies to reduce the flexibility built into data protection law. There is evidence that legitimate interest as a legal basis for complex processing is rejected because the complexity makes it hard for individuals to understand well enough to object to processing. Is there a possibility that people won’t understand? Absolutely. Does this risk mean there is a default to giving disproportionate emphasis to procedural risk? Absolutely not. This is why a risk assessment requires looking at the full range of interests across the full range of stakeholders. These issues will only become more complex. For example, when insulin reservoirs are implanted in people, and data flows are necessary not only to monitor the patient but also to support research, are regulators going to require the data only be used if the entire information ecosystem related to health is understood by patients? Twenty years ago, risk related to data use was discussed purely as a risk to business. Professionals discussed regulatory, reputation, compounding and reticence risk. The community has made great progress over the past twenty years. However, there has always been some friction between risk being seen as an inability for individuals to exercise rights and risk being associated with outcomes from processing. Procedural risk, the risk that individuals might not be able to exercise their rights, is important. However, the negative outcomes that come from the decision to process or not process data, impact risk, also is of great importance, arguably more so. There are times when procedural risk should prevail but not all the time. Again, this is why data protection law increasingly requires risk assessments that are proportional to the complexity of the processing and the issues raised. The IAF is seeing organizations go in two different directions. Organizations that embrace that they are data driven understand that sustainability requires them to engage in the processes that enhance stakeholder trust, such as enhancing their assessment processes which balance the risk equation for people. Other organizations focus on the regulatory risk to the organization and not on the risk to people, and this organization risk focus drives data protection laws away from their core mission of data should serve people. Join the IAF at its Policy Call on October 14 where we will explore this issue. 
[1] This comment does not mean that the IAF does not think reform is necessary when data is gathered by the government from the private sector. Back in 2014 the IAF issued a paper suggesting accountability principles for government use of private sector data. In 2016 I organized a plenary session at the International Conference of Data Protection and Privacy Commissioners on this specific issue.
- Canada is the Right Place to Explore Next Generation Privacy Legislation
Privacy law is less settled today than it was when the European Union General Data Protection Regulation (GDPR) went into effect two years ago. While the GDPR has sparked new privacy legislation in other geographies, implementation of the GDPR as a truly risk-based regime is still in question. Regulators are having difficulties overseeing legal bases other than consent, and to the staff at the IAF, Schrems II raises questions about the meaning of risk based. Further, in addition to the challenges of a risk-based approach, it is not clear there is a proficiency for applying a full range of rights and interests in balancing requirements, let alone addressing the needs of society. The California Consumer Privacy Act (CCPA) hadn’t even been implemented fully when an initiative was placed on the fall ballot to amend it. Brazil’s new privacy law goes into effect without an effective oversight process in place. Privacy is complex, yet the digital age continues to move forward. Can legislatures even create the guardrails for a digital society that is not just innovative but also respectful of the full range of human interests, both to an individual and to groups of individuals?

Forward-looking privacy legislation must create guardrails so that data pertaining to individuals is used in a manner that facilitates the growth of the digital marketplace in a fair and just manner. Such legislation must create a legal framework where computers and communication technologies serve the full range of needs of individuals, singly, as a group, and societally, while respecting the fundamental interests individuals have in legitimate spaces where they are free of surveillance, where they can choose to be part of a community, and where data pertaining to them is processed in a fair and just manner. Those three rights, weighed against the full range of interests of multiple stakeholders, could comprise a balanced scorecard for next generation privacy laws.

So, in an environment of questionable success, hope now shifts to Canada, a country where there are three jurisdictions taking steps to revise privacy law. Philosophically, Canada lies between the European view that the full range of fundamental rights and freedoms must be protected and the United States position that free expression trumps fair processing. Privacy is a Canadian fundamental right that has been practically implemented by OECD-based legislation. Privacy in Canada is more understood by Canadians than defined. Most fundamental rights are fairly straightforward. Not so privacy. In fact, scholars have a hard time capturing the essence of privacy in definitions. So, rather than define the right, it is often simpler to define the interests that the right encompasses. There are three interests: The first is the individual’s interest in seclusion. All of us need a space where we are free of observation or intrusion into our private lives. This interest in seclusion rests on privacy within a household and the papers and the records associated with that household. In many ways, our interest in seclusion has been eroded by the observational nature of modern society, where one may create a record of behavior without a legacy paper record. The second is the individual’s interest in defining himself or herself and not being defined by the digital tracks he or she leaves behind. This is reflected in actions related to the individual’s autonomy, or ability to control the data that pertains to the reputation of the individual. The third is the individual’s interest in fair processing.
This interest relates to the individual’s interest in fair treatment, absent inappropriate discrimination, with decisions based on accurate data. As data has become fundamental to the way processes and machines work (e.g. internet-of-things), more of the work of privacy agencies and privacy professionals has been dedicated to fair processing. How technology interfaces with those three interests is very different today than 30 years ago, when Quebec enacted the first of Canada’s private sector privacy laws. That law, and others in Canada, predate the risks and benefits to individuals that have come with the Internet, smart phones, connected cars, advanced analytics and an internet of everything.

When the enactment of privacy legislation is considered, how privacy intersects with other fundamental rights and interests needs to be considered as well. This intersection also should include other stakeholders (e.g. groups of individuals and/or society) as well as an express consideration of the benefits to specific rights fair processing might create. Privacy, while fundamental, is not an absolute right. Every individual has other rights and interests that are just as important. Those interests include better health and education, the right to be employed and create a business. They also include the right to information and to make decisions based on data-validated facts. Sometimes those interests are best served when aggregated with the interests of other individuals into societal interests. For example, while an individual has an interest in how health records might impact reputation and standing, the individual also has an interest in that data being used in a protected manner for healthcare research. Canadian law, more than most, has recognized that group interest in better healthcare through research. Quality privacy law links one or more of those privacy interests and allows for proportionate balancing among all rights and freedoms.

The three efforts currently underway in Canada to amend or enact privacy legislation are:

First, the federal government signaled its intent to update the private sector privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA), as part of a new digital agenda for Canada. The digital agenda’s overarching goal is consistent with the scorecard above. The devil will be in the details on how legislation is drafted that will encourage economic growth through fair digital innovation and societal interests, while staying loyal to Canadians’ attraction to privacy as a fundamental right of individuals.

Second, the Quebec government has tabled a discussion draft of privacy legislation that would update the current provincial law that is 30 years old. Canada is a federal state that enables private sector privacy law at the provincial level, where the provincial laws are similar to PIPEDA. Quebec (along with British Columbia and Alberta) is one of those provinces. However, the Quebec proposed law doesn’t match well with the IAF scorecard. It is not flexible and depends much too much on consent as a means of making processing lawful, even as consent has become less and less viable as a governance concept. It also doesn’t build on the accountability framework already established in Canada. Furthermore, the draft legislation prohibits data transfers to jurisdictions without equivalent protection. Adequacy findings are always tricky, maybe more so when the other jurisdiction is a province in the same federal system.
Equivalency as adequacy has not been fully effective anywhere. The IAF will post the comments that it will be filing with the Quebec government.

Third, Ontario, Canada’s largest province by population, doesn’t currently have a private sector privacy law, and by extension its private sector is overseen by the Federal Privacy Commissioner. That oversight would change if the Ontario government were to enact a new privacy law that is deemed similar to PIPEDA. The intent to do so was signaled by a discussion paper recently released by the government entitled “Ontario Private Sector Privacy Reform.” The discussion paper includes a link for comments and a survey. The discussion paper includes key themes that would be explored as part of the legislative process. Ontario has an Information Commissioner that is responsible for public sector privacy and provincial institutions such as healthcare. The new privacy statute would provide oversight and grant regulatory authority for the private sector to the Information Commissioner. The discussion paper contains very broad themes that need to be developed into specifics, and there seems to be a process to do so.

The IAF has conducted three projects in Canada, all involving a multi-stakeholder discussion. The IAF, a global organization, has been asked why it puts so much time and effort into Canada. The answer is because the Canadian laboratory works. The Canadian government, regulators and other stakeholders have been debating modernization for the last half decade, and these debates have been dynamic. Whether it has been the Privacy Commissioner’s consultation on consent, the Ministry of Innovation, Science and Economic Development (ISED) support of solutions for People Beneficial Data Processing, or the many discussions on the digital agenda, the discussions have been open and frank. The discussions in the three projects the IAF has conducted in Canada have been fruitful, in part because of the willingness of all parties to listen to each other and learn. So, the IAF, in its research and education mission, will continue to participate in the Canadian debate because the Canadian laboratory works, and the Canadian debate has the most promise of creating replicable models for other jurisdictions.
- Muddling as a Data Protection Strategy
Schrems II has led to muddling as the latest data protection strategy. The muddling is over transferring data from Europe to anywhere else in the world. The ultimate problem is that national security agencies, no matter where they are located, can’t and don’t want to be part of binding agreements to assure Europeans have similar protection when their data are processed outside the European Union.

So, companies will do the right thing by muddling. They will document that they have not been the subject of bulk data requests. They will document that they have a procedure to deal with such requests if they are received in the future. Depending on the risk of receiving such requests, they will put in place supplemental measures (e.g. store data in Europe, encrypt data when it is in transit, and/or pseudonymize the data when analytics are run on them). They will have privacy policies that disclose that transfers are covered by standard contractual clauses. They will hope it is unlikely that their data transfers will be subject to a regulatory complaint or a suit filed directly with the courts by an NGO.

It is possible the end may be in sight for some of this muddling. The EU Commission and the U.S. Department of Commerce have announced negotiations for a third agreement following Safe Harbor and Privacy Shield, but such an agreement does not cover transfers to the Philippines or Canada. Such an agreement does not address adequacy discussions with the handful of countries that already have an adequacy finding. Furthermore, how can U.S. negotiators plead the case for dynamic data flows when the U.S. bans TikTok and WeChat? While companies muddle along by doing their best to interpret the data protection tea leaves, limited resources are diverted away from initiatives that could actually improve privacy and protect individuals. In addition, few outside of academia understand the nuanced philosophical debates about constitutions and rights, cementing the unflattering impression that privacy is squishy and silly, not to be taken seriously by executives who operate in the here and now.

The best case for muddle as a strategy is it buys time, and during that time, policymakers will negotiate an agreement that buys additional time. Articles 79 and 80 of the GDPR raise questions about the wisdom of muddle as a strategy. Individuals may go to European courts directly or collectively through NGOs. For example, on August 17, NOYB reported that it had filed 101 complaints related to transfers to the U.S. The IAF is holding a policy call September 14 to discuss collective action in Europe.

An agreement between just the U.S. and Europe isn’t the answer. A U.S.-EU deal doesn’t solve the problem because data flow to many different countries, and national security agencies in those countries will use the data. In the end, companies can’t solve this problem. Yes, they can lobby governments, but it is those governments that must solve the problem, and they have the incentive to do so. Data drive national economies, as the digital strategies of the European Union, Canada, Singapore, and almost every country that has an industrial economy show. The IAF’s mission is to encourage data’s use in a way that data serve people as those digital strategies move forward. One of the IAF’s first projects was on government use of private sector data and maintaining an accountability chain.
As a member of the advisory panel for the International Conference of Data Protection and Privacy Commissioners (now Global Privacy Assembly) in Morocco, the IAF facilitated a plenary session on this topic. As part of its mission, the IAF will continue agitating for a global agreement. It is only through talks illuminating the proportionality between individuals’ interests in seclusion and autonomy on one hand and their desire to live in secure physical environments in all countries on the other hand that data protection strategies for corporate data transfers will go beyond muddle to real probability.
- Data Protection Should Be Proportional to Other Individual Rights
COVID-19 has been a grim reminder of how unfair the world is. People of color and people who are economically challenged have had more than their fair share of the disease and its consequences. One way to limit the damage to human life has been the use of data associated with people for isolating COVID-19 hot spots. Data protection authorities globally have issued guidance declaring that tracking applications must be proportionate, considering the rights to privacy and data protection. However, it is not clear they have asked the same question in reverse: Is the right to privacy proportional to the right to life? Or the right to health? Or the right to conduct a business? Or in other contexts the rights to liberty and security?

The IAF has been thinking about proportionality as a process that is specific to balancing the full range of fundamental rights. This viewpoint seems to be consistent with many parts of the GDPR (e.g., the balancing tests required in Legitimate Interests or Data Protection Impact Assessments). This approach also is consistent with the view that while it is a fundamental right, data protection is not an absolute right. Today, proportionality most often seems to be referenced within a fundamental right (e.g. “protection of personal data”) and less often as a construct that should be applied to a fuller range of fundamental rights. In short, if the GDPR requires an assessment of the full range of fundamental rights, then the proportional balance should be against this full set of rights. Picking and choosing applicable rights is inconsistent with both the GDPR and the EU Charter of Fundamental Rights. While other parts of the world may not have the explicit connection that exists in Europe (the EU Charter of Fundamental Rights and the GDPR), the implicit connection between privacy and other rights exists and, by extension, should be balanced in a proportional way.

This disconnect is shown when law professors Hannah van Kolfschooten and Annick de Ruijter explore when public interest might abridge individual rights in a proportional manner in their paper, “COVID-19 and Privacy in the European Union: A Legal Perspective on Contact Tracing.” Since the court cases related to health screening are limited, the paper looks to the case law related to government interest in national security. The paper speaks about the issue as public interest, national security and public health versus individual rights to privacy and data protection. In reality, COVID-19 impacts individuals’ rights to life and health and ability to be freely employed. Those are individual rights, not just collective societal rights. The recitals to the GDPR frame the right to data protection as one of the rights that exist among the full range of rights and freedoms impacted by the decision to process or not process data; therefore, they all should be considered.

Prior to this view of proportional balancing, the IAF put on the table in the “Unified Ethical Frame for Big Data Analysis” the view that the applied ethic of legal, fair and just requires an analysis of the full range of fundamental rights and interests and consideration of society as a group of individuals. The IAF then further argued that the impact of not processing data also should be weighed with the impacts of processing data. This vision of proportionality among fundamental rights is reflected in IAF’s work in Europe, Hong Kong, and Canada on ethical assessments. Those assessments proposed questions that thoughtful organizations should ask.
This concept of looking at a fuller range of fundamental rights was reflected in the European Court of Justice’s decision in the Google Spain case. There the court recognized that the fundamental rights to information and to conduct a business are both rights that should be considered. Yet, neither the European Data Protection Board nor its constituent authorities have issued guidance or opinions that reflect the view that processing personal data involves a risk, benefit, or detriment analysis that is proportional among all rights and not singularly focused on the rights to privacy and data protection. This narrow focus on data protection and privacy is an issue that is much broader than the COVID-19 pandemic. Yet the virus has been a useful mirror on assessment and balancing processes that might be conducted in a more complete manner. In the end, excellent data protection needs to be in context and proportional to its impact on the full range of rights and interests. Proportionality across fundamental rights will be fully explored in future IAF work.
- Time for a Global Treaty
“What a revoltin’ development this is!” This exclamation is from the 1950s sitcom “The Life of Riley.” Today it is a great description of the general state of the data protection profession. In “The Life of Riley,” Chester Riley’s classic tag line, “What a revoltin’ development this is,” described the endless rough spots in his and his family’s life. That is pretty much like most of the dilemmas we face, where robust data use is conducted among a set of conflicting economic imperatives, security issues, and societal values. Schrems II is the current dilemma. I believe Omer Tene framed it best in his commentary last Friday, “The Show Must Go On.”

I have spent the last two weeks preparing for an IAF policy call on July 23 on the pathway forward after Schrems II. The Court decision did not surprise me at all. I have been working on data transfer issues from Europe since 1995 when the EU Directive was enacted. Back then the issues purely related to commercial data use. That was a simpler time. 9/11 changed that forever. The desire for government use of private sector data for national security purposes became and remains an imperative. Every country uses data pertaining to people for safety reasons in a world where threats are asymmetrical. Much of that data will come not directly from individuals but from opportunities to observe people, where that observation often is by private sector players. Every country will create safeguards consistent with its political culture and societal values. Every society has a prejudice for its own values, and we, as people, judge those values based on our own situational lens.

The Lisbon Treaty made the Charter of Fundamental Rights the keystone for the European Union. People must have “actionable rights” whenever their data is used. The bottom line is that every transfer must provide those actionable rights. In its public statement last week, the EDPS said that individual actionable rights are not just a “European” fundamental right but a fundamental right widely recognized around the globe. I agree with this statement. However, the how and when is different from country to country. The General Data Protection Regulation provides European regulators with the obligation and authority to evaluate whether data exports will be protected in the importing location, unless the importing location has been found adequate by the EU Commission. Schrems II first and foremost reminds exporters and regulators of their obligation and authority.

Schrems II was specifically about the United States. From this non-legal practitioner’s perspective, the court focused on two issues. The first issue related to the necessity of bulk collections and whether those collections are proportional. On that issue, private sector players can conduct risk assessments and put in place additional safeguards and supplemental measures. The second issue, whether non-U.S. citizens, particularly European nationals, have standing for redress, from my perspective, is more difficult. If the answer to the second issue is no, is the question of likelihood even relevant? If the question is a matter of right to redress that would be required by European law when data is exported, I am not sure likelihood of collection is relevant.
The question of government data collection for national security has been festering since the September 11 attacks in 2001 and was the topic of a massive investigation by professors Fred Cate and Jim Dempsey, “Bulk Collection: Systematic Government Access to Private Sector Data,” that included participation by academics from almost all regions. That investigation concluded that this bulk collection is a problem almost everywhere. At the end of the day, companies, even the most ethical ones, cannot stop governments’ interest in private sector data. While an international treaty is almost inconceivable, it is time to envision such a treaty.

In May 2014, Jennifer Stoddart and I joined Fred Cate and Jim Dempsey in organizing a dialog on corporate accountability and government use of data. The orientation paper may be found here. The paper suggested a reframing of the essential elements of accountability for governments: Can the five essential elements of accountability be transposed into the governmental context? Without trying to be comprehensive now, here are some thoughts:

1. Organization commitment to accountability and adoption of internal policies consistent with external criteria. When applied to a government data processing program, this element may squarely pose the question of how adequate the external criteria (that is, the law authorizing government demands) are. The formulation used by the ECtHR is that a law must describe a governmental power precisely enough to protect against arbitrary application and to inform the public of which entities can conduct surveillance and under what criteria.

2. Mechanisms to put privacy policies into effect, including tools, training and education. “Tools” suggests use of audit trails, documentation, and permissioning systems for internal access and query. Further elaboration on the essential elements conducted in Paris, Madrid, Brussels, Warsaw, and Toronto suggests processes that assess the risks to individuals associated with new processing (including collection) and that make mitigating those risks part of the final processing plan. Such privacy by design practices should be part of an agency’s comprehensive privacy program. Training should probably start with an understanding of privacy and data protection, since the terms, although widely used, are often misunderstood.

3. Systems for internal ongoing oversight and assurance reviews and external verification. The Article 29 Working Party has specifically called for more meaningful oversight of intelligence agency programs involving collection and use of personal data. It said in its April opinion that the following good practices from the various oversight mechanisms currently in place in Member States should be part of the oversight mechanisms in all Member States:
• Strong internal checks for compliance with the national legal framework in order to ensure accountability and transparency;
• Effective parliamentary scrutiny; and
• Effective, robust, and independent external oversight, performed either by a dedicated body with the involvement of the data protection authorities or by the data protection authority itself, having power to access data and other relevant documentation as well as an obligation to inspect following complaints.

4. Transparency and mechanisms for individual participation. As discussed above, transparency means both public awareness of what data is being accessed as well as numerical reporting to indicate the scope of government access. The Article 29 Working Party stated: “Some form of general reporting on surveillance activities should be in place.”

5. Means for remediation and external enforcement. Remediation can mean judicial redress. In the US, the legal (and constitutionally based) doctrine of “standing” and the state secrets doctrine make it very hard to challenge national security surveillance in court. The ECtHR has a much broader definition of standing, which might be a model.

These five elements should be the starting place for a discussion of an international agreement that allows all governments to use private sector data for national security purposes but gives actionable rights to all people. By starting the discussion with these five elements, such an international treaty will address how governments act responsibly with respect to this type of data, how they demonstrate their actions and how people can obtain redress if governments fail to act responsibly. These steps put the responsibility back where the accountability lies and give people actionable rights to hold governments responsible.
- Privacy Challenges of Small, Medium and Micro Enterprises
The IAF has focused on accountability that achieves fair processing when information is used for innovative purposes. This focus means most of IAF’s corporate participants are very large enterprises or are organizations that provide services for those very large enterprises. The exception has been Victoria Anzola’s training company, Seqqe, which works with many SMEs in Spain and Colombia. Victoria has joined the IAF team in seminars in Latin America and Europe and has brought the perspective of SMEs to the policy development table. Victoria shared with the IAF her notes on the privacy challenges of SMEs. With her permission, the IAF posted these notes on the Publications part of its website. The link is here.
- Privacy Law Must Focus on Consequential Harm
There is general agreement that privacy analysis and enforcement should be more risk based. The IAF increasingly is looking at risk in terms of consequential processing and at case studies to help it understand what is meant by consequential processing. Rental housing in the United States is a scarce resource, and rental housing usually cannot be obtained without a tenant screening report. While nothing could be more fundamental than a place to live, landlords are focused on their own risks when they receive these reports to screen applicants.

Few areas of risk management have changed more since the Internet revolution than the process of tenant screening, a form of consumer reporting covered by the Fair Credit Reporting Act (FCRA). Twenty-five years ago, a landlord would pull a credit report that covered loans and bankruptcies. Today, the landlord has available any one of the hundreds of tenant screening organizations that not only have access to a credit report but also to all government records that have been made public. Those data do not have the precise identifiers that have improved traditional credit reporting, and they do not have the appropriate level of oversight to make sure accurate information is being reported. Instead, these organizations make use of less precise algorithms, fueled by less precise data, that are maximized to not miss anything that might, or might not, be relevant to the applicant. In addition, landlords generally accept this reporting without any processes that ascertain the accuracy of the information.

“Algorithms that scan everything from terror watch lists to eviction records spit out flawed tenant screening reports. And almost nobody is watching.” With those words, Lauren Kirchner and Matthew Goldstein began their Sunday New York Times story, “How Automated Background Checks Freeze Out Renters.” The authors describe data processing that is consequential to people’s lives. The consequences, such as potential homelessness, are more significant than the risks typically associated with AdTech as a result of offering products and services. [1] According to the New York Times article, the ease of access to public record information, such as criminal records, has made entry into the tenant screening business easy. Furthermore, the article says that many tenant screening services use very loose matching criteria, leading to false attribution of negative information to applicants. This practice leads to people losing the ability to have housing because the screening organizations have defaulted to a standard that favors landlords in tight markets.

There is growing consensus that privacy law should prioritize risk. The IAF model legislation, the “Fair and Open Use Act,” is risk based, with five bands of risk. Yet privacy practitioners do not always agree on what risks are meaningful and whose risks should be modelled. As advanced analytics driven by ever-increasing data availability are looked to, the term consequential becomes even more meaningful, and few activities could be more consequential than putting roofs over the heads of children. In looking to legislation to model, the Fair Credit Reporting Act is seen as one of the most enduring pieces of fair processing legislation ever enacted. It both has facilitated a consumer economy and has provided rights to individuals when consequential decisions about them have been driven by data.
The FCRA requires that data organized by third parties and used for substantive decision making be accurate, fair and used only for such decisions. The FCRA also gives consumers the right to know when they are denied credit, insurance, employment and housing based on a report for one of those reasons. It gives consumers the right to inspect the report, challenge the accuracy of the data, require that incomplete or incorrect data be fixed or removed within thirty days, and require that new information be reported to the business that made a substantive decision related to the consumer. When reports are used for employment, it requires the credit bureau to report negative information to the consumer as it reports it to the business so the consumer may dispute or put in context the negative information immediately. The FCRA requires maximum possible accuracy. This requirement should translate into reporting all the information that pertains to an individual and none of the information that pertains to other individuals with similar names, addresses, or other identifying information.

There is no immediacy requirement in the FCRA for negative information in a tenant screening to be shared with the applicant when it is shared with the landlord, where a mismatched criminal record could block a lease. Where there is a housing shortage, a landlord will not wait 30 days to see if a tenant report will be corrected. So, individuals are harmed by inaccurate tenant reports. The result is that adults and their children are not finding a place to sleep, share a dinner, study and play safely. Inaccurate tenant screening reports are consequential harm that requires attention. What should be a market differentiator among credit bureaus (including tenant screening organizations) is the quality of the algorithms that make the decisions to include or not include data. This condition goes to maximum accuracy. If tenant screening agencies loosen criteria to be inclusive of data that may or may not match the consumer, the resulting inaccuracy is enforceable by the Federal Trade Commission. With hundreds of tenant screening services, the FTC does not have the manpower to police a very good but not perfect law. The FCRA could be made better by requiring the screening service to provide negative information to the applicant at the same time it is provided to the landlord.

While the FCRA and FCRA enforcement are not perfect, they at least have the correct target. They are directed at fixing a defined consequential harm — being denied a substantial benefit based on inaccurate or badly processed data. Instead, privacy legislation in the United States is tending toward what professors Corien Prins and Lokke Moerel refer to as “mechanical proceduralism,” dependence on consumer governance of very complex processing that could not possibly be described in consumer-understandable privacy notices. Nor should this be the backstop. The IAF is not suggesting that transparency is not important and that individuals should not have control where effective. Rather, privacy law should prioritize consequential harms and provide regulators with tools that give actors who inappropriately harm individuals certainty that they will be caught and punished. That approach probably means more resources for regulators. The IAF model legislation is focused on fair processing that regulates consequential harms. The IAF model also defines the powers that would be exercised by the FTC to achieve more certain enforcement.
A future IAF blog will focus on consequential harms as they relate to the GDPR. [1] AdTech used to manipulate elections, as in the Cambridge Analytica case, is different.
- Transparency Needs a Makeover
It seems ironic to be using the term “makeover” in the midst of the global pandemic, as many of us are in need of a haircut and as my wife says “namaste” to hairdressers during Zoom yoga classes. But the term “makeover” fits transparency, a key pillar of trustworthy data processing that needs to be both clearer and better defined as part of building a foundation for a long-term, workable and adaptable legal framework.

In a recent IAF blog on Demonstrable Accountability, it was posited that trust based on organizational accountability must be advanced. The blog went further to suggest that trusted governance based on accountability by the very organizations who are the stewards of the data, developers of technology and decision makers was a foundational building block for a long-term, workable and adaptable legal framework. The blog also reiterated that while individual participation is key, a backward step to relying on consent would do little to provide or advance trusted data processing or data protection. Consent as a means to achieve an objective of transparency often does neither. It was flippantly suggested that some policymakers accessorize “consent” with gems like affirmative, express, clear, informed, specific, and separate. Once modified with trendy adjectives, consent is familiar, easy, and dressed up, but it is not ready to go. Why? If people don’t understand well enough to agree, object, or exercise rights and choices, how can they consent? The underlying problem is the kind of openness these transparency mechanisms deliver, not the governing mechanism itself. In the IAF’s recent work in Canada on People Beneficial Data Activities, one of the recommendations to policymakers was the need for organizations to be subject to both internal and external oversight, which is a form of verifiable transparency.

There are several other reasons why transparency needs a makeover. First, the objectives of transparency are not clear, and the term itself is not defined. While “being transparent” is an explicit and implicit requirement embedded in the GDPR, and despite the many specific communications the GDPR requires, which must also meet a standard of fairness, transparency is not defined. By extension, its objectives are not clear. The Article 29 Data Protection Working Party’s Guidelines on Transparency (A29WP Guidance) acknowledge this gap. One objective of transparency is to enable trust through “openness.”

Second, transparency is bundled with other requirements and with other communications vehicles. In Europe, the GDPR, and the Directive before it, treat the constructs of consent (Article 6(1)(a)) and transparency (Article 5(1)(a)) as separate requirements. However, over time and in practice, both constructs have been bundled together in privacy notices so that the notices either implicitly or explicitly double as consent (e.g., “Click I Agree”). The A29WP Guidance states that “transparency, when adhered to by data controllers, empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data by, for example, providing or withdrawing informed consent and actioning their data subject rights.” This statement conflates the objective of transparency with the exercise of consent and other rights. The U.S. has been even more explicit about combining consent and transparency in “Notice and Choice” models, and this approach has been codified in some sectoral laws such as the Gramm-Leach-Bliley Act (GLBA).
Not only did regulations under the GLBA create standard, template notices, but those notices have been implemented by literally thousands of financial institutions. The result is that they provide no value either in terms of increasing transparency or in terms of explaining complex data uses. Combining transparency with legally driven communications requirements results in legal communications not at all suited to context or to the goal of openness.

Third, the complexity of technology and associated data flows and uses is becoming simply too difficult for the simple objective of transparency to be met. For example, in the case of AdTech, the complexity of players and data flows, and by extension “explainability,” is perceived to inhibit the ability of individuals to exercise their right to object to receiving an ad or to the profiling necessary to deliver the ad. Regulators believe that the processes are so complex and the descriptions so obtuse that individuals are not knowledgeable enough to exercise their rights. This criticism not only bundles the objective of transparency with legal requirements, as noted above, but it also suggests that an objective of transparency is “verifiability.”

In Artificial Intelligence (AI), transparency is increasingly subsumed by “interpretability.” As the authors of the paper Toward Trustworthy AI Development note, “AI systems are frequently termed ‘black boxes’ due to the perceived difficulty of understanding and anticipating their behavior. This lack of interpretability in AI systems has raised concerns about using AI models in high stakes decision-making contexts where human welfare may be compromised. Having a better understanding of how the internal processes within these systems work can help proactively anticipate points of failure, audit model behavior, and inspire approaches for new systems.” This lack of understanding raises an objective of transparency different from simply being able to explain what is going on. It suggests that, in addition to helping people understand the reasons for the processing and the means used to achieve those objectives, transparency and interpretability help achieve “verifiability.” While the authors note that the problem is compounded by a lack of consensus on what interpretability means, the goal is to verify claims about “black box” AI systems that make predictions without explanations or visibility into their inner workings. They go on to suggest that “a model developer or regulator may be more interested in understanding model behavior over the entire input distribution whereas a novice layperson may wish to understand why the model made a particular prediction for their individual case.” This discussion highlights that there are entirely different solutions and goals for each audience and for each objective of transparency.

As these three issues illustrate, if trusted governance is based on accountability by the very organizations who are the stewards of the data, developers of technology and decision makers, then more transparency about their stewardship and decision-making processes is not only desirable but necessary for them to be both responsible and answerable. This is demonstrable accountability. Demonstrable accountability reveals the need for a makeover of transparency, especially given the complexities of technology and data processing, the goals of individual participation and the need for more transparency of data stewardship processes.
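To make the audience distinction the quoted authors draw more concrete, here is a minimal, hypothetical sketch using a toy linear scoring model: a “global” view of which features drive scores across a whole population (the developer or regulator’s concern) versus a “local” explanation of one individual’s score (the data subject’s concern). The model, weights, feature names and data are invented; real interpretability tooling for black-box models is far more involved.

```python
# Toy linear risk scorer; everything here is invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
features = ["income", "tenure_months", "late_payments"]
weights = np.array([-0.8, -0.5, 1.6])      # higher score = higher predicted risk
X = rng.normal(size=(1000, 3))             # standardised population data

def score(x):
    return float(weights @ x)

# Global view (developer/regulator): which features drive scores across the
# whole input distribution?  For a linear model, |weight| * feature spread.
global_importance = np.abs(weights) * X.std(axis=0)
print(dict(zip(features, global_importance.round(2))))

# Local view (one person): why did *my* score come out this way?
# Per-feature contributions relative to the population average.
x_person = np.array([-1.2, 0.3, 2.0])      # one individual's standardised values
contributions = weights * (x_person - X.mean(axis=0))
print(dict(zip(features, contributions.round(2))), "score:", round(score(x_person), 2))
```

The two printouts answer different questions for different audiences, which is precisely why a single transparency artifact cannot serve them all.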
In short, the means and objectives for each of these three objectives of transparency should be rethought, and the current focus on transparency as providing information just to individuals should be broadened. Individuals may want to know what is going on, but including these descriptions in long, complex notices, often drafted with legal obligations in mind, fails to deliver on any meaningful objective and makes it next to impossible to provide meaningful information in complex data or technology scenarios. If one needs an advanced engineering degree to understand the notice, the average individual will not have a clue. Transparency should be thought about in terms of audience and communications objective. This construct was first explored in the IAF’s Effective Data Protection Governance and leads to three recommendations:

1. Individual Participation – Individuals have a right and a desire to know about data processing about them. Individual participation can provide them access to specific choices but does not necessarily do so. Because of the different ways data are processed, the different purposes, the different processors and the different technologies that may be involved, the timing, method, content and format should depend on the circumstance. Therefore, communication methods should be approached in entirely different ways if the objective is to inform individuals when it should matter. For example, both Microsoft (see Privacy Dashboard) and Google (see Privacy Controls) have a very consumer-focused approach to informing and involving their customers. TELUS has recently provided very informative content in a consumer-centric way covering Data Analytics. The development and delivery of context-based transparency should be required and should not be conflated with consent or other legal notice requirements. The goal is transparency for “explainability.” Organizations could be required to develop these forms of context-based vehicles of explainability without being told how to do it.

2. Privacy Notices – These notices are principally designed to meet a legal objective. They should be drafted as a comprehensive statement for regulators and others interested in the details of processing. They should be accessible to individuals, but individuals are not the targeted audience; they are written by lawyers and intended for lawyers. The goal is transparency for verifiability (accountability). Organizations could be required to provide these forms of notice, much as they do now, but the notices should be independent of explainability communications.

3. Demonstrable Accountability – Organizations should demonstrate transparency to the public, to employees and to the government. For example, on its website an organization may disclose: the types of complex advanced data activities in which it engages, how data are used to achieve each beneficial purpose and the types of third parties to which personal data may be transferred in order to achieve each people beneficial purpose; a description of the governance processes it employs (e.g. policies and procedures) regarding data activities; a description of its assessment process, including how benefits and risks are determined and assessed; and the types of oversight it uses. The goal is transparency for verifiable, demonstrable accountability. Organizations could be required to provide this type of transparency.
Transparency is a linchpin for both individual participation and demonstrable accountability. How it should be applied as part of a goal of trusted data protection needs to be rethought. These concepts should be considered by policymakers as part of building a foundation for a long-term, workable and adaptable legal framework. The IAF is committed to an ongoing dialog on the multiple types of effective transparency.
- Take the Long View: Demonstrable Accountability
Every year at this time the Ten Commandments is aired on network TV, and every year many of us end up watching at least part of Hollywood’s dramatic interpretation of the Exodus from Egypt as told in the Bible. Regardless of your faith, it’s a must-see movie just for the “camp” and glitz of the Golden Age of Hollywood. There’s a famous scene when Moses, played by Charlton Heston, climbs Mt. Sinai to receive the Ten Commandments from the Lord. When Moses doesn’t return immediately with new laws, the Children of Israel lose their patience and faith, leading them to return to the old, discredited ways of the Egyptians. You know what comes next, right? They built a golden calf, which not only marked a step backwards in thought, it proved to be a monumental waste of gold and other precious metals. That scene, with the drama and outdated special effects, is about impatience and returning to discredited theories in the face of anxiety and uncertainty. We’ll leave other themes to religious leaders.

As the decade flipped to the ’20s, we thought there was near universal agreement among policymakers, regulators, civil society, and commercial actors that individual consent was increasingly less effective as a mechanism for governing and green-lighting the complex data processing necessary for digital progress and innovation. The growing consensus and recognition that new approaches to managing data protection in the Fourth Industrial Revolution were needed struck many, including us, as real progress. The GDPR’s risk-based approach to data protection and its pivot to ways other than consent to achieve lawful and trustworthy data processing were positive signs, welcomed by diverse stakeholders.

But recent developments seem to suggest that less than two years after the EU hit the start button on the GDPR, some regulators have already lost patience with the risk-based framework. The IAF filed comments on the UK ICO’s draft code of practice on direct marketing (Draft Code), whose conclusion seems to lead to requiring consent for all marketing without any proportionality test. In the US, many policymakers seem fixated on consent, in part because it’s a familiar concept that’s relatively easy to grasp and draft, even if the application of consent does not produce long-term solutions to the myriad issues raised by today’s complex processing. Does this retreat from progress signal a return to worship at the feet of the just-rejected golden consent model? We hope not. There’s too much at stake for everyone involved, including consumers.

Much of the future’s collective economic growth, benefiting both individuals and society, will be driven through increasingly complex digital transformation. We don’t need to reference reports anymore, and no one is challenging the notion that our future will be revolutionized through technology such as the Internet of Things, 5G networks, cloud computing, big data, artificial intelligence, blockchain, and supercomputing power. Many of these digital trends involve complex data, data use and technology that are increasingly challenging to understand and by extension place tension on traditional models of data protection. We call this the “duh” paragraph because it’s not necessary to say, but it sounds good and tends to be the one paragraph everyone agrees on.
Here are a few more “duh” statements for your consideration:

- We need to maximize the societal benefits of processing while minimizing risks
- Data use can and will produce phenomenal benefits
- Data use can also cause real harm, but the risk of harm can be managed and mitigated
- Trust is critical for success, and we don’t have it

We are not going to have trustworthy systems, ethical processing or responsible data practices by requiring companies to leverage notice and consent. In a world of observation through billions of sensors, complex data flows, inferences and answers produced by sophisticated machine learning, and evolving business models and technology, meaningful consent prior to the use of data is near impossible. It’s foolish to pretend otherwise, and we’re not persuaded by those who argue (disingenuously, we believe) that we’re not giving consumers credit for knowing what they want. Consent mechanisms can be easily manipulated, and they place the burden on individuals who are not equipped, even under the best circumstances, to make informed choices on the fly. We end up with false choices and pretty colors, not trustworthy business practices.

To be super clear, we’re not suggesting that individual participation in data activities that have an impact on people is not important. To the contrary, in certain contexts it will be essential in our rapidly evolving, data-driven marketplace. Recent improvements to consent mechanisms regarding the collection of precise location information on mobile platforms are an example where consent can play an important function. But shifting accountability to individuals across the ecosystem through mechanisms like consent is not effective in driving responsible and trustworthy processing, much less in eliminating adverse consequences that may derive from the collection and use of data.

Sometimes it appears that consent is an uber-stylish, haute couture approach to prohibiting certain data processing practices. Of course, policymakers may decide that certain uses of data should not be permitted. That’s fair. We elect and appoint these individuals to make those tough decisions. But if that’s the objective, then please state it so that stakeholders understand the policy and can develop a path forward. Regulators shouldn’t design impossible-to-satisfy requirements for consent to produce the same outcome. Of course, producing a list of “thou may” and “thou shall not” is difficult, controversial, and often produces unintended consequences, bringing decision makers right back to the difficult, knotty place they hoped to avoid. It’s a tired story, isn’t it? We often feel like we’re going in circles. When the going gets tough, policymakers get going, right back to outdated consent.

In many respects we’re sympathetic to the challenges policymakers face, and we’re not suggesting they have an easy task. To be fair, regulators are rightly skeptical and frustrated with some business-driven approaches to the challenge of risk-based, accountability-type models. Part of it may have to do with the way aspects of the GDPR have been implemented by business. For example, some regulators have commented in discussions that they have not seen many good examples of a proper “balancing of interests” made operational by business. Put another way, they are questioning the effectiveness of actual “accountability” processes within the GDPR as implemented by business. Some of their concerns are justified.
Popping out a formulaic risk assessment by pushing a button on automated “do-it-yourself” privacy software is not what they had in mind, yet we’ve all seen those solutions. And so, like the top fashion designers in Paris and Milan, they accessorize “consent” with gems like affirmative, express, clear, informed, specific, and separate. Once modified with trendy adjectives, consent is familiar, easy, and dressed up, but it’s not ready to go.

We’ve committed privacy heresy here. We’ve attacked the foundational principles of consent and individual control. We’ve also asserted that line drawing is near impossible and almost always produces unintended consequences. We’ll add one more unorthodox point of view: checklist approaches to privacy and data protection do little to actually manage risks and prevent negative outcomes.

So now what? How about a little honesty first? Business models, technology, data flows and data are complicated and ever evolving. A complex solution that contemplates the nuances of different practices and the context of processing is going to be needed. Second, any path forward that does more than put lipstick on a pig is going to require serious investment of resources by industry. Full stop. There are no cheap seats in the big data arena for responsible actors. In order to enable robust use of data, responsible actors must demonstrate their commitment.

Here’s the kicker. There’s really nothing new here. Organizations have had the ability to address and manage risks related to data processing all along. No one has wanted to do it, at least not in a serious, continuous, comprehensive and demonstrable way that can be articulated to anyone outside the business. This reluctance has contributed to the credibility chasm with regulators. Rightfully so. Should anyone care? Yes, because the result seems to be more prescriptive guidance and (less effective) requirements that will, in some cases, negate the ability to create beneficial value from data. In the process the ultimate loser is the consumer, the very individual the regulator wants to protect and the business wants to pursue as a loyal customer.

There must be a way out, or at least a path forward. We think there is. It is a framework based on demonstrable accountability, which incorporates data governance, risk assessment, verifiability, and enforcement. We need trusted governance based on accountability by the very organizations who are the stewards of the data, developers of technology and decision makers.

What’s holding back a greater emphasis on accountability? A lot. Part of the tension and circularity of thinking may have to do with the combination of the growing complexity of data-driven activities and general trust issues with business. We summarized some of the trust issues above, but in sum it comes down to this: some regulators and policymakers increasingly think some in industry are full of it when it comes to privacy and data protection. There are a number of steps business and regulators must take to advance trust based on organizational accountability. While individual rights and participation remain key, as outlined in the IAF’s suggested principles that update both individual participation and organizational obligations, a retrenchment to mechanisms such as consent is clearly not the answer. A tilt to trusted accountability as an effective governance mechanism is really the only way to achieve the promise of digital transformation, and business needs to invest more.
There are steps that can help bridge this gap and move us forward (and not backward):

- Enhanced Accountability – Business must adopt and make operational the elements of Data Stewardship Accountability, particularly for advanced and complex data-driven innovation. As Stephen Wong, Hong Kong’s Privacy Commissioner for Personal Data, noted, “In order to be able to transform data into information and information into knowledge and insight and knowledge into competitive advantage, and in order for individuals to be able to trust data processing activities that might not be within their expectations, enhanced data stewardship accountability elements are needed.”
- Demonstrable Accountability – While the GDPR requires organizations to be able to “demonstrate” that they are meeting the law’s requirements, increasingly, and in particular for advanced and complex data-driven activities, organizations will need to proactively show accountability in “demonstrable” ways. The IAF’s recent report addressing A Path to Trustworthy People Beneficial Data Activities included steps to achieve demonstrable accountability.
- External Oversight – An element of demonstrable accountability is oversight. The IAF believes this is a key element of trusted accountability and recognizes that further work is needed to determine the degree and type of effective and efficient (and cost-effective) oversight and governance for organizations processing data in complex scenarios. Effective and efficient oversight and governance are necessary for these types of processing to be fully trusted.
- Codes of Conduct – Codes may provide a path forward, as they can outline the key accountability requirements for a given data processing activity or for a specific industry data activity. These efforts should be encouraged by business and regulators alike, with reasonable tolerance for “test and learn.” Some, for example, have called for prohibitively complex and costly “certification” schemes that will inhibit interest in codes that could actually advance trust. The enemy of the good is the perfect.

These are but four paths forward, all part of building trusted accountability. Of course, they need to be backed up through enforcement. We need to advance these and learn together. A shift backward to more consent will limit data-driven innovation and, in reality, will not deliver the goals of autonomy and control these mechanisms were intended to meet. Business not investing in and being willing to demonstrate accountability mechanisms, and/or regulators demanding costly, prohibitively complex oversight, will kill forward progress and ultimately inhibit the benefits we all will receive as part of a digital transformation.

If you’re thinking that our ideas don’t seem possible in 2020, you’re right. Instead of trying to solve last year’s issues with this year’s reinterpretation of consent, we are trying to lay the foundation for a long-term, workable and adaptable legal framework that will be relevant and fully implementable in 2030. We think that if we all take a longer view, and don’t react to the privacy issue of the day, we’ll reach a solution that genuinely protects consumers while fostering innovation, growth, and the data processing necessary for the future. We need to advance Trusted Governance Based on Accountability.
- Bermuda Report on Information Accountability
Privacy and data protection laws are filled with concepts that are notoriously difficult to put into tangible effect. Privacy itself is often defined, somewhat amorphously, as “protection against intrusion” or “respect for private life.” Laws and regulations draw on principled statements and evoke laudable goals such as fairness or accountability. But what does all this mean in practice? How does a person sitting at her desk respect one’s private life or show fairness? She has a piece of paper with information on it and has to do something with it. What actions should she take, and how does she evaluate her success in meeting the principles of privacy? Of course her lawyer would tell her, “It depends,” and fair play to that. It does depend, on factors like the type of information or the way she hopes to process the data. Regardless of the specific actions she chooses, she needs a programmatic way to demonstrate her accountability.

Accountability is a concept that many of us understand in a common-sense way: the idea that actions have consequences, or that someone will have to answer for why they chose to take a given course. Like its sibling privacy principles, that idea may be less intuitive to understand from an action-oriented perspective. When you break accountability down into its constituent parts, it becomes a road map not only for how to respect privacy, but for how to structure a successful, ethical program:

- When many organisations access data, which one is in charge of what happens to the data?
- Who within an organisation ultimately makes decisions about what is done? Who executes those decisions? Who answers to the public? Who would the subject of that data call or email if they had questions?
- How does an organisation make these decisions? How does it ensure its staff or partners follow standards or receive the training they need?
- How does an organisation show its work, both to those who trusted it with their data and to the supervisory entities responsible for monitoring their performance?

Luckily for us all, regulators and policy-makers have been posing these questions for about as long as the field of data protection has existed. The Bermuda Report on Information Accountability surveys the history of accountability from its origins and from (almost quite literally) the four corners of the globe. It describes the evolution and formalisation of accountability as a core privacy principle, essential to the success of private organisations as well as their regulatory environments. For all those people sitting at desks with pieces of paper in front of them, the Report provides tangible examples of both “building block” and ongoing steps to ensure a successful privacy program. Organisations engaged in cutting-edge, advanced data processing through machine learning or artificial intelligence should pay special attention to the “Enhanced Data Stewardship Accountability Elements,” which provide a framework for building and maintaining strong ethical standards for decision-making even in environments where the processing is unthinkably quick or comprehensive. I offer my thanks to the team of the Information Accountability Foundation, particularly lead writer Lynn Goldstein, and look forward to continuing this vital conversation.

Alexander McD White
Privacy Commissioner
- A Pivot (Back) to Accountability
In a recent article, Sheila Colclasure, Senior Vice President and Global Public Policy Officer at LiveRamp, wrote: “If you want your company to exist now and in the future, you will have to think and act with data. . . . With this [responsibility] comes accountability . . . . Business leaders must think strategically about the reality of becoming data-driven in a responsible way to deliver benefits and prevent harm.” This pivot (back) to accountability is reinforcing “accountability” as a foundation of data protection. This is happening for a number of reasons. The most important is the growing use of data and the impact it is having on individuals and society. However, it is useful to reflect back on the initial focus on accountability through the lens of the Global Accountability Dialogue to explore why this pivot is taking place.

History of Accountability

Accountability has been a data protection principle since the 1980 adoption of the OECD Guidelines. Canada adopted accountability explicitly as part of its 1996 standard and the 2000 Personal Information Protection and Electronic Documents Act. APEC adopted the OECD accountability principle as part of its 2004 privacy framework. Accountability was implicit in the European country laws enacted after the 1995 EU Directive. The Spanish Data Protection Agency’s Joint Proposal for an International Privacy Standard included accountability when it was released as the Madrid Resolution. Despite its repeated recognition as a critical component of effective data protection, there was no clear definition of accountability or guidance on how it might be demonstrated or measured. This was the genesis of the Accountability Project, [1] initiated in January 2009 by an international group of experts from government, industry and academia to define the essential elements of accountability and consider how an accountability approach to information privacy and data protection would work in practice. At that time, while the importance of data to economic development was emerging, the issue of the day centered on cross-border data flows and, at some level, the need to create accountable and interoperable data protection systems. What emerged from this project in jurisdictions such as Canada and Hong Kong was specific guidance to business on regulator expectations for accountability. This guidance, and in the U.S. more explicit laws and privacy enforcement actions by, for example, the FTC, led to, at least anecdotally, more mature privacy programs inside organizations. For example, and again anecdotally, the adoption of Privacy Impact Assessments seemed to be more prevalent in Canadian and U.S. organizations than in Europe. Despite the recognition that privacy and data protection programs require a programmatic approach to accomplish their core objectives, and despite the many specific process and procedure requirements embedded in the GDPR, a summary of those requirements tied to “accountability” does not exist. Perhaps more relevant, there has been a relative lack of guidance to European business on the design and components of an accountable, comprehensive data protection program.

Business Environment Today

In 2019, we are in a different place relative to the importance data is playing as a driver of economic growth and innovation and as an integral part of our daily lives as individuals and consumers.
Sensors, artificial intelligence and machine learning enabling advanced analytics and decision making are now mainstays of our digital environment. The importance of data is only going to increase. According to PwC in its Trusted Data Optimization work, 86 percent of businesses say 2019 is the year in which they will race to extract value from data. These companies see the potential, on average, to reduce total annual costs by a third and to increase incremental revenue by over 30 percent. These same businesses also highlight a number of challenges to realizing this goal. In addition to core data problems such as reliability, 33 percent of businesses cite their inability to address new regulations (and emerging requirements such as ethical data processing) affecting privacy and data protection as a barrier. One way to summarize the current state is: data-rich, but information-poor and inadequately protected. Sixty percent of business leaders lack full confidence or certainty that their company has a comprehensive program to address data security and privacy, and only 18 percent of CEOs strongly agree their organization is adapting the way it monetizes data to better address data privacy and ethics.

Accountability Today

It is clear that for businesses to succeed in optimizing data in a trusted way, they will need to enhance accountability mechanisms both as a way of facilitating internal risk decision-making and as a way of meeting marketplace expectations. The more data-intensive a business is, the more risk it creates, and by extension the more programmatic its approach needs to be to allow for trusted data optimization. The Colclasure article encompasses many parts of the IAF’s work on Data Stewardship Accountability. Colclasure describes the need for several operational components that map to IAF work, such as ethical data impact assessments and oversight, and ethical sourcing and partner accountability. These “data” trends have advanced other trends in data protection. In 2009, the focus on accountability was positioned as supporting the way individual rights had been thought about. Those rights, at the time, were heavily centered on individual control. Today, the complexities of data flows and use have changed the way individual participation and organizational obligations are thought about. This has led the IAF to suggest principles that update both parts. The first part describes the rights necessary for individuals to function with confidence in our data-driven world. The second part is focused on the obligations that organizations must honor to process and use data in a legitimate and responsible manner. The IAF’s blog, Which Accountability Category Do You Fit In, addressed two challenges. One was the challenge of translating principles into legislation, and the other was the question of whether an organization’s level of data processing requires core accountability elements or Accountability 2.0 (Data Stewardship Accountability), which addresses fair or ethical data processing. These two challenge areas are intertwined. But as the IAF works through the details to address both sides, it is clear the business need and the public policy pivot are going to increase the relative focus on accountability. While the IAF is helping with research and education to assist in effective and reasonable public policy in this space, smart businesses are getting ahead of the trend to best facilitate trusted data optimization by implementing Accountability 2.0.
[1] The Accountability Project eventually was incorporated as the Information Accountability Foundation.
- Demonstrable Accountability and People Beneficial Data Use
Data-driven societies and economies must create a means for data that pertains to people to be used in a productive and protected manner. Those uses include processing so complex that effectively governing it through consent would require specialized knowledge beyond many people’s ability. If consent is beyond a person’s capabilities and is therefore less effective for permissioning data processing, then a trustworthy alternative to consent must be found. An alternative is especially necessary when processing creates real value for people, groups of people and society. Use of an alternative means of permissioning does not mean a reduction in transparency. Transparency, robust corporate data governance subject to the law, and internal and external oversight are necessary to hold organisations accountable.

Innovation, Science and Economic Development Canada (“ISED”) shares the IAF’s interest in this issue and helped fund a multi-stakeholder project to suggest “A Path to Trustworthy People Beneficial Data Activities.”[1] Canadian private sector national and provincial privacy laws typically require consent in order to process personal data. Newer privacy laws such as the European General Data Protection Regulation specify that personal data processing must rest on a legal basis, and consent is only one of six legal bases. The IAF’s task was to set forth the elements of an accountability process for situations where consent is not effective but the processing creates tangible, well-documented benefits for people. This process is demonstrable accountability, which, among other requirements, includes an assessment process that balances the risk of harm and the benefits to people. The components of a demonstrably accountable process include:

- Designating senior officers who are accountable for People Beneficial Data Activities
- Conducting People Beneficial Impact Assessments (PBIAs)
- Achieving an enhanced standard of transparency covering People Beneficial Data Activities and their associated governance processes
- Having specific internal oversight
- Being subject to independent external oversight
- Keeping records of People Beneficial Data Activities
- Protecting individual rights and implementing transparent redress systems

The IAF suggests all of these components must exist for people beneficial processing to be trustworthy. The IAF report includes recommendations for ISED to consider as it drafts any proposals related to existing privacy laws that might be considered by Parliament:

- Explicitly recognize People Beneficial Data Activities as serving a legitimate purpose because, on balance, the activities are beneficial to people when the risks to people are reduced to an acceptable level. Limiting People Beneficial Data Activities to those provided within the “conditions of the supply of a product or service” might keep personal data from being used for people beneficial purposes.
- Explicitly move beyond consent as the primary authority to process personal information for People Beneficial Data Activities where consent is not fully effective.
- Explicitly recognize People Beneficial Data Activities as a new or expanded authority to process personal information beyond reliance on the concepts of consent and legitimate purposes tied to the provision of products and services. This recognition would lessen reticence risk (i.e., a reduction of any inhibition to data-driven innovation) and would provide more benefits to stakeholders because these activities would not be limited to those provided within the “condition of the supply of a product or service,” as long as the People Beneficial Data Activities meet all the elements of demonstrable accountability and are aligned with the objectives, culture, and values of the organization.
- Expressly provide a method for determining what data activities are people beneficial and provide clarity regarding this processing through public policy. This express provision will reduce reticence risk to people, society and organizations and help to implement Canada’s Digital Charter while protecting the privacy of Canadians.
- So that People Beneficial Data Activities are transparent, expressly provide in policy requirements the elements of demonstrable accountability: be accountable, conduct a People Beneficial Impact Assessment (PBIA), be transparent, have internal oversight, be subject to independent external oversight, keep records, and protect individual rights.

The IAF also suggests that further work is needed to determine the degree and type of effective oversight and governance for organisations processing data for people beneficial uses. Effective oversight and governance are necessary for this type of processing to be fully trusted. Canadian businesses contributed to the research and the development of the report and participated in a multi-stakeholder session that included academics, privacy advocates and regulators. While this project was specific to Canada, the IAF believes the findings are applicable to other jurisdictions. Europe is struggling to make legitimate interest work as a trusted legal basis to process data, and the United States is only beginning the public policy process for advanced data use. The IAF will be updating its prior work on legitimate interests based on the people beneficial data activity report. The project report, which includes a model assessment process, can be found here. Please let us know what you think.

[1] The IAF is solely responsible for the report findings. They do not necessarily reflect the views of ISED, participants, or the IAF Board.
- Knowledge Discovery Alone Is Not a Similarly Significant Effect
New knowledge drives mankind forward. Sometimes the knowledge is used wisely; sometimes it is not. Sometimes inappropriate uses have negative impacts on individuals. Data, much of it relating to individuals, is key to the generation of knowledge. However, it is important to separate two distinct functions: data-driven knowledge creation (knowledge discovery) and the application of the resulting knowledge (knowledge application). The GDPR encourages knowledge discovery but also requires the accountable process of impact assessments that identify risks to individuals and make sure those risks are documented. The best public policy is one that encourages knowledge discovery, even at the commercial level, and knowledge application in a legitimate and fair fashion.

The IAF team is concerned that society is sleepwalking into an era where knowledge discovery will be precluded by a restrictive reading of data protection law that treats discovery as if it were knowledge application. For that reason, the IAF filed comments on the UK ICO’s draft code of practice on direct marketing (Draft Code). The IAF is concerned that the Draft Code suggests the mere processing of data to generate insights, without any tangible negative effects, will be considered to have consequential effects. The requirements the GDPR attaches to such effects are thereby extended to knowledge discovery, where there is often far less direct impact on individuals. For example, one issue in the comments is profiling to segment markets. What is at issue is when profiling has a legal or similarly significant effect. Part of the IAF’s comments are as follows:

Underlying the IAF concerns are the differences between privacy and data protection as fundamental rights. The right to privacy relates to individual autonomy and family life, while the right to data protection relates to the risk to people arising out of the processing of data pertaining to them. The right of individuals to control their data, as a privacy right, is always important, but it is particularly so in instances where individuals should have the ability to protect themselves and their families and to form and socialize new ideas with a small circle of chosen friends. Consent as a governance mechanism works most effectively in situations where individuals knowingly provide data. Increasingly, data have their origin either in individuals’ interaction with the world (observed data) or in the insights that come from processing data (inferred data). The legal basis for that processing increasingly is legitimate interests or fulfillment of a contract. In those instances, the processing must be fair. Fairness includes transparency, and transparency is challenging in the direct marketing ecosystem; there is room for improvement there. Fairness also requires a series of assessments to determine that data bring value to people and do not cause actual harm. The General Data Protection Regulation (“GDPR”) created data protection impact assessments (“DPIAs”) to make sure organisations consider both benefits and harms to stakeholders when processing data. Individuals benefit from competitive markets, so it is reasonable to consider whether reduced competition, caused by overly cautious interpretations of data protection law, creates tangible harms to individuals. As stated earlier, observation has become ubiquitous in today’s society.
The IAF believes that the movement to limit third-party cookies will have some societal benefits in this area. However, even with those changes, the technology and processing behind market segmentation will remain complex, and understanding that process will not be most individuals’ main concern. So the role of organisations and regulatory agencies becomes more important. Organisations must conduct assessments at almost every stage of the processing and must be able to demonstrate those assessments were conducted in an honest and competent fashion. Regulators must oversee and enforce substantially enough that organisations believe the likelihood of enforcement is high.

The segmentation process uses probability to segment individuals into cohorts of those likely to do something and those not likely to do so. Segmentation logically fits into the GDPR’s definition of profiling. The GDPR requires consent where the profiling has a legal or similarly significant effect. It is the IAF’s view that a lack of individual awareness of the robustness of the processing does not, on its own, meet the test of being a similarly significant effect. A similarly significant effect may come from the actual use of insights to make decisions. DPIAs are designed to identify similarly significant effects, justify or mitigate them, and document the outcome. The IAF sees indications in the Draft Code that the ICO is leaning in the direction of finding that the processing of data for segmentation has a significant impact on the individuals the data pertain to. Requiring knowledge discovery to be subject to consent would diminish the societal value brought by direct marketing and therefore have a negative impact on individuals. While these comments are directed at the ICO, there are indications that other data protection authorities may have similar views.

Knowledge discovery may create insights that are detrimental to individuals when used in an inappropriate fashion. This type of potential risk is why the GDPR is “risk based” and requires assessment of risk. But to restrict profiling and knowledge discovery to only those situations where consent is an effective governance process creates reticence risk. The assessment and balancing of risks through processes outlined in the GDPR, conducted honestly and competently, is the better answer. The IAF will be scheduling a policy call on March 19 to discuss the issues raised by the Draft Code.
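As a purely illustrative aside, the sketch below shows in schematic Python what probabilistic segmentation into “likely” and “unlikely” cohorts can look like, and why the insight itself is distinct from any later, potentially consequential use. The scoring rule, threshold and data are invented and are not drawn from any real direct marketing system.

```python
# Hypothetical illustration of probabilistic market segmentation; the scoring
# rule, threshold and profiles are invented, not taken from any real system.
import random

random.seed(1)

def purchase_propensity(profile):
    # Toy scoring rule standing in for a trained model.
    p = 0.1
    p += 0.3 if profile["visited_category_page"] else 0.0
    p += 0.2 if profile["past_purchases"] > 2 else 0.0
    return min(p, 0.95)

population = [
    {"id": i,
     "visited_category_page": random.random() < 0.4,
     "past_purchases": random.randint(0, 5)}
    for i in range(10)
]

THRESHOLD = 0.4
cohorts = {"likely": [], "unlikely": []}
for person in population:
    bucket = "likely" if purchase_propensity(person) >= THRESHOLD else "unlikely"
    cohorts[bucket].append(person["id"])

# The segmentation step only produces an insight (cohort membership);
# whether that insight has a "similarly significant effect" depends on
# how it is later used to make decisions about the individuals.
print(cohorts)
```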
- IAF Releases Model Legislation Summary
The California Consumer Privacy Act went into effect on January 1, and a ballot initiative to update that law is slated for November. State privacy legislation has been reintroduced in Washington state, and Nevada is following. Privacy Shield may be overturned by the European Court of Justice, and more countries are adopting legislation that requires adequacy for transfers. Pressure for the United States to enact comprehensive federal privacy legislation continues to accelerate, and more Congressional committees are trying to determine what type of privacy legislation should be enacted and what it would take to break the political logjam.

The United States needs comprehensive federal privacy legislation. However, it needs privacy legislation that does not look backward to yesterday’s issues, but rather looks forward to the issues that will emerge in the next decade. With that in mind, the Information Accountability Foundation (IAF) published a model bill in 2019 called the “Fair Accountable Innovative Responsible and Open Processing New Uses that Secure and Ethical Act,” or the “Fair and Open Use Act.” The linked eight-page summary is a guide to the key points in that model legislation.

The model legislation is based on the core principle that organizations must be responsible stewards of data that pertains to people, so the benefits of an information age belong to everyone. It specifies the FTC as the enforcement and oversight agency and provides the FTC with the mandate and resources to conduct both tasks. The model bill includes individual rights but does not put the onus on individuals to enforce the law through consent and complaints. Additionally, the model legislation:

- Requires risk mitigation through assessments and links the mitigation to five risk bands;
- Covers all data that is personally impactful, not just personal data;
- Defines observed and inferred data, with an understanding that data is increasingly observed and created, not just collected;
- Is globally interoperable, because it requires that data be processed only for legitimate purposes and defines those purposes; and
- Defines the obligations of an accountable organization and requires the organization to be demonstrably accountable.

It is not the IAF’s intent to enact this model legislation; rather, the IAF’s intent is to use the model language to inform the legislation being written by others. The IAF wants to work with all stakeholders to enact legislation so that the benefits of the information age belong to everyone and so that, in today’s data-driven economy, organizations are responsible stewards of personal data and are accountable for their actions. It is the IAF’s desire to use this summary of the model legislation to open the door to understanding legislation that is first and foremost aimed at demonstrably accountable, responsible use by organizations of data that pertains to us. Please let us know what you think.
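For readers who want a sense of what “linking risk mitigation to risk bands” could mean operationally, the sketch below is a purely hypothetical illustration. The band names, thresholds and obligations are invented placeholders; the actual five bands and their associated mitigations are defined in the Fair and Open Use Act and its summary, not reproduced here.

```python
# Purely hypothetical placeholder: the model legislation defines five risk bands,
# but the names and obligations below are invented to show the general pattern of
# tying required mitigations to an assessed level of risk.
from enum import IntEnum

class RiskBand(IntEnum):
    MINIMAL = 1
    LOW = 2
    MODERATE = 3
    HIGH = 4
    CONSEQUENTIAL = 5

REQUIRED_MITIGATIONS = {
    RiskBand.MINIMAL: ["internal record of processing"],
    RiskBand.LOW: ["internal record of processing", "basic risk assessment"],
    RiskBand.MODERATE: ["documented impact assessment", "named accountable officer"],
    RiskBand.HIGH: ["documented impact assessment", "named accountable officer",
                    "independent internal oversight"],
    RiskBand.CONSEQUENTIAL: ["documented impact assessment", "named accountable officer",
                             "independent external oversight", "regulator notification"],
}

def obligations(band):
    # Returns the invented mitigation list for the assessed band.
    return REQUIRED_MITIGATIONS[band]

print(obligations(RiskBand.CONSEQUENTIAL))
```

The design idea being illustrated is simply that obligations scale with assessed risk rather than applying uniformly to all processing.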
- Digital Activities go Beyond Privacy and Data Protection
On Sunday, November 10, the New York Times ran a story on the ability of bad actors to hide and distribute child pornography on the Internet. On Tuesday, November 12, the New York Times ran a story on a unit of Google assisting Ascension, the second largest U.S. health organization, to mine data on millions of patients to look for new insights. These activities are perfectly legal, but they are not transparent to the patients. The data seem to be used for good, but they are used in an environment where secrecy leads to suspicion. The recent International Conference of Data Protection and Privacy Commissioners discussed the misuse of observed data for many purposes but focused on how such data lead to microtargeting in elections that puts liberal democracies at risk. Since the conference, the willingness of companies facilitating digital political ads to police out-and-out false information in those ads has been debated by the Facebook and Twitter CEOs.

It is hard to argue that any of these three issues is purely a privacy and data protection issue. Instead, they go to digital responsibility, or maybe even digital civility. The focus on the appropriateness of digital activities will increasingly be not about technical compliance but about what is responsible. As New Zealand Privacy Commissioner John Edwards recently argued in a keynote address for the IAPP, these issues, particularly for smaller countries impacted by technology from larger trading partners, are about the preservation of cultural values.

Fair processing, which is required in some form or fashion by all privacy and data protection laws, requires organizations to own the risks they create for others. This includes the risk that individual autonomy may be lost in the digital world, but it also includes the risk to the full range of personal rights and interests impacted by processing or not processing data. Increasingly, organizations are being required to balance that full bucket of risks, mitigate the injurious risks where possible, and determine the risks that must be managed to gain the societal benefits that may only come from digital knowledge that drives smart decisions. What I am suggesting is that the moral obligation for organizations must now go further. It goes to the impact of information and information processing on all aspects of society. For the moment, the term that comes to mind is digital responsibility.

My concern is that having a corporate silo or a government enforcement agency labeled privacy enhances the sense that the preponderant risk that must be emphasized is the ability of individuals to control their digital footprint. This orphans other data-related risks. Individuals having control of their data might be a perfect state, but it doesn’t capture the world we have lived in since observation became increasingly necessary to make things work and data became the driving force behind knowledge creation. I am not saying that our digital footprint should not be better respected or that minimization, where appropriate, should not be a regulatory objective. But more and more, the vocabulary for our missions should reflect the fairness that is part of all laws governing the use or nonuse of data that is impactful not just on individuals but on society as a whole. I would like to see a movement towards recognition that privacy is just one aspect of digital responsibility.
I would like to see the day when a CEO turns first to her or his senior leader for digital responsibility when the organization does something different in the digital age. At times, the issue will be privacy. At other times, it will be content moderation. And at still other times, it will be the impact on individuals at risk. From my perspective, there needs to be a recognition that the risks we are facing today go beyond those related to privacy and data protection failures and extend to digital responsibility failures as well.
- IAF Releases “Advanced Data Analytic Processing – 2019 Update”
Central to the work of the Information Accountability Foundation is the concept that using data to discover new insights about people raises a different set of risks than using data to make decisions about people. That foundational idea was first explored in a paper published by the Centre for Information Policy Leadership entitled “ Big Data Analytics: Seeking Foundations for Effective Privacy Guidance .” Paula Bruening and I were among the paper’s authors. The IAF asked Ms. Bruening to update that paper for the issues presented in 2019. The IAF is very pleased to release “ Advanced Data Analytic Processing – 2019 Update .” Please let us know what you think.
- Christopher Docksey – Keynote on Accountability At the 41st Conference of Data Protection and Privacy Commissioners 24 October 2019 in Tirana, Albania
Good morning. I would like to thank the International Conference and Commissioner Besnik Dervishi and his staff for inviting me here today and for their excellent welcome.

Ten years ago, on 6 November 2009, this Conference embraced accountability in the Madrid Resolution on International Standards for the Protection of Privacy. This marked the culmination of a series of meetings lasting over a year, which started with the idea that international transfers of personal data could be facilitated by accountability. Of course, this aspect is still a very important element of BCRs and the CBPR. However, accountability was transformed over those discussions from a principle limited to international transfers into a self-standing general principle. As a result, Article 11 of the Madrid Resolution provides (paraphrasing) that the data controller shall actively develop compliance and be able to demonstrate compliance to data subjects and to regulators. I would like to use this keynote to show why this text, adopted almost exactly ten years ago, is so important, and to discuss what accountability means and how it can be achieved.

To establish my credentials, maybe I should start with my own experience of accountability. As head of the EDPS Secretariat I was both a regulator and the data controller. Of course I made sure that the EDPS was compliant, but I had to have a “Eureka” moment, or a “Damascus” moment, to learn that accountability means more than compliance. I was not in the bath like Archimedes, nor on the road to Damascus like St Paul. I was on the train to Paris to give a talk to CPOs. I read a handbook on the train which explained what accountability actually entails, and I understood, to my dismay, that we were compliant but not accountable. I actually thought, “Uh oh, I hope they don’t ask me what we are doing to be accountable”! Fortunately, as you know, privacy professionals are caring persons, and I returned safely to Brussels. And then it took over two years to develop our accountability programme.

The meaning of accountability

Not enough people know what the principle of accountability means for delivering privacy and data protection. Richard Thomas tells us it is an amorphous word, a typical Anglo-Saxon word, derived from keeping accounts. Indeed, in some languages there is a similar nuance of financial accountability or of paying the bill, for example Rechenschaftspflicht in German and rozliczalnosc in Polish. Many languages simply use the word ‘responsibility’, for example la responsabilité in French. But the word responsibility is too close to compliance on the one hand and to legal liability for non-compliance on the other. Accountability is different. The key elements of accountability can be found in the terms used in Colombia and Spain: la responsabilidad proactiva (Spain), actively developing compliance, and la responsabilidad demostrada (Colombia), being able to demonstrate compliance. Put them together and you have accountability, la responsabilidad proactiva y demostrada: actively developing, demonstrating, and being able to demonstrate, compliance. In reality, accountability is a term of art; what matters is what accountability is, not what its name is. We should remember what Juliet said about Romeo and his family name: “a rose by any other name would smell as sweet.” So if the word for accountability is not very helpful in your language, think of it as a rose: eminently desirable, but supported by sharp thorns.
Accountability across the world

If accountability is a rose, it has been flowering across the whole world. The original OECD Guidelines in 1980 include the term ‘accountability’. However, they use the word only to mean compliance with legal obligations. As I realised on the train to Paris, compliance is different from accountability: simple compliance with legal requirements is not enough. Accountability is a fundamental shift in approach. In effect, it has moved data protection from an adjective to a verb. What we had before was an adjective, a definition of who was the person responsible. Now we have a verb, an activity: what is the responsible person doing?

The first legislation on accountability in this sense was the Canadian PIPEDA in 2000. It was followed by the APEC Privacy Framework in 2005, which is implemented by an accountability mechanism in the APEC Cross Border Privacy Rules (‘CBPR’). Things really started moving from 2008-2009 onwards. In the Global Accountability Dialogue, a series of meetings also known as the Galway Project, regulators, organisations and individuals through civil society co-operated and exchanged ideas. As a result, the accountability principle was adopted in the Madrid Resolution in 2009, and in the Article 29 Working Party Opinion on Accountability in 2010. In that Opinion the Working Party specifically asked the Commission to include an accountability clause in the future GDPR, and its inclusion in the GDPR represents a significant policy success for EU regulators. In 2012 Canada took the lead again: three Canadian Commissioners provided guidance on Getting Accountability Right with a Privacy Management Programme. This was a crucial step: without guidance, most organisations have no idea what to do in practice to be accountable. From then on, guidance on accountability was developed by regulators across the world: in 2013 the Best Practice Guide in Hong Kong, in 2015 the Guide for the Implementation of the Principle of Accountability in Colombia and the Privacy Management Framework in Australia, and in 2018 the Privacy Accountability and Compliance Framework in the Philippines and the Model AI Governance Framework in Singapore.

Over the same period, an increasing number of countries and international organisations adopted accountability into their national laws. Mexico adopted accountability in its 2010 Law and explained it in the 2011 Regulations. The OECD updated their Guidelines in 2013 to include a new Part Three on Implementing Accountability, and in 2016 the EU enshrined the accountability principle in Article 24 GDPR. Peter Hustinx says that Article 24 is his favourite article of the GDPR, and is “arguably the most central provision of the Regulation.” In 2017, Guernsey – which has an existing “adequacy” finding from the EU – updated its data protection law in line with the GDPR, including the accountability principle. There is a lesson in this for the other existing “adequacy” countries, which would be well advised to include accountability in their updates too. Perhaps most importantly in this list, in 2018 the accountability principle was enshrined by the Council of Europe in Article 10 of Modernised Convention 108. In Africa, the Ghana Commissioner, Patricia Adusei-Poku, has stressed at this conference that African regulators are looking to accountability and not merely compliance. And this year, Brazil adopted its LGPD, which also specifically includes the principle of accountability.
This brief history illustrates two important points. First, accountability is a global standard. Colleagues across the world are used to the EU telling them that we are the “gold standard”, the “bees’ knees”, and that everyone should respectfully listen to us. But we cannot say this about accountability. You can see from the timeline that the GDPR came late to the party. We are learning too. Second, accountability needs both legislation and explanation. It must be in the law – accountability is not self-regulation – and it has to be backed up by effective guidance. Marty Abrams has blogged that “the basic principle of accountability must be embraced by law with regulators able to generate enforceable guidance related to the principle”.

The need to talk about accountability

We need to talk about accountability today because there is a lot to be done. The GPEN 2018 Data Sweep looked at how well organisations have implemented the core concepts of accountability into their own internal privacy policies and programmes. It concluded that “organisations should be doing more to achieve privacy accountability”. Similarly, the IAPP/EY 2018 report found that only 32% of organisations considered their programme “mature”, and noted that 56% of organisations subject to the GDPR said they are far from compliance or will never comply. Accountability can help if it is seen as part of the solution, not as part of the problem. The new data protection legislation is invariably criticised as a step too far, imposing onerous obligations and procedures, and so on. But once controllers have understood what it is to be accountable, they will understand the need for the rest.

The road to accountability

One pragmatic way of understanding accountability is to see it as a toolbox, full of useful tools. If we take the GDPR as an example, we can see a number of these accountability mechanisms. They are not merely legal requirements; they also represent best practice:
- privacy by design and privacy by default
- records of processing activities
- security measures and data breach notification procedures
- DPO/CPO
- DPIA/PIA
- codes of conduct
- certification

Too many people think that accountability, and these accountability mechanisms in the toolbox, represent yet more legal obligations. However, we should see them as helpful tools rather than as extra obligations, as part of the solution rather than the problem. And the accountable organisation that uses these tools will find that it has carried out the core of its various legal obligations.

Another way of looking at accountability is as a philosophy: of being a responsible and ethical steward of personal information. There are various roads to enlightenment, to saying “Aha! I understand!” If you remember, I had my “Aha!” moment on the train to Paris. It can come to top management if they receive an effective message. Many senior managers realise that privacy, although it is an extra burden, is something that has to be done. Tim Cook was like that. A few years ago he was at the same meeting as Giovanni Buttarelli. He invited Giovanni to a short 15-minute meeting and asked him to explain “all this privacy stuff”. Giovanni responded by asking him whether he knew the name of his CPO. And they went on from there. Giovanni came out of that meeting over an hour later, and Tim Cook had his “Aha!” moment. The path to enlightenment can also come from team members, maybe by reminding managers that they are processing the personal information of fellow human beings.
In Acxiom, the analytics team developed a model of ‘10,000 audience propensities’, which included scores for sensitive personal information such as ‘erectile dysfunction’ and ‘vaginal itch’. The leadership team was discussing whether the use of such scores would be too invasive, when one member of the team announced that she would be able to read the actual scores on these sensitive topics for each of the individuals in the room. Once confronted with this very personal information, the leadership team had their “Aha!” moment and understood that these types of scores were ‘too sensitive’ to be made available as a product to customers. This story shows how important colleagues can be for raising awareness, and it reminds us that modern data processing can be very, very personal, and that managers need to take it very, very personally.

How to implement the principle of accountability

In 2012, the Canadian Commissioners said that accountability was the “first” among the fair information principles. Why the first? Because it is “the means by which organisations are expected to give life to the rest of the data protection rules”. In 2009, the Galway Project identified five ‘common elements’ of accountability:
- Organisation commitment to accountability and adoption of internal policies
- Mechanisms to put privacy policies into effect, including tools, training and education
- Systems for internal, ongoing oversight and assurance reviews and external verification
- Transparency and mechanisms for individual participation
- Means for remediation and external enforcement

I would like to concentrate on four key elements of accountability today. First and foremost, organisations must take responsibility for the personal data that they handle. This starts with ensuring top management commitment, taking data protection seriously, being honest, and managing risks. Top management must then ensure that managers and colleagues at all levels give their support – otherwise a fine-sounding privacy policy will be a hollow shell. Second, once there is that commitment, it is time to adopt a Privacy Management Programme (PMP). It is not necessary to do everything at once; one can prioritise and handle the issues step by step. Accountability is a process, a responsibility that requires constant care and attention. Third, the organisation has to have a privacy professional, the DPO or CPO, the person or the team who will assure internal implementation of the PMP. In its 2010 Opinion on Accountability, the Article 29 Working Party stressed that the DPO is the ‘cornerstone of accountability’. This year, in the Stockholm Declaration, the Nordic data protection authorities recognised the importance of accountability and committed themselves to help ensure GDPR compliance by supporting DPOs in their important tasks. Finally, I would stress the need to ensure the transparency of the measures in the PMP, for data subjects, regulators and the public. Transparency goes to the heart of the concept of accountability. Sometimes it is not the processing that is the problem so much as the lack of transparency to users.
For example, if Google, Amazon, Apple and Facebook had announced that they wanted to make recordings of their smart assistants, and to use human beings to check those recordings for quality purposes; if they had set out a clear framework of what they wanted to do, surrounded by safeguards, and had called for volunteers: then we would not have had the scandals this summer, with newspaper articles asking “Why are you snooping on me?” and “Alexa, are you invading my privacy?”

The advantages of accountability

Accountability offers clear benefits both to organisations processing personal information and to their regulators. For regulators, I would underline three reasons to encourage organisations to be accountable. First, demonstrated accountability can satisfy the due diligence obligation of the regulator. Under accountability laws, the first thing the regulator can do is ask to see the accountability records. These records, or their absence, make it possible to distinguish between accountable organisations and organisations that have no clear overview of their processing activities, thus enabling the regulator to prioritise its investigatory work on the latter. Second, accountability minimises over-reporting of data breaches. An accountable organisation will know when to notify and, more importantly, when not to notify. There is a huge difference between the percentage of data breaches that people assume should be notified (100%) and the percentage that actually have to be notified after good incident risk preparation and assessment (10%). Such knowledge represents a huge saving of effort for both regulators and organisations. Third, accountability can work as a bridge between jurisdictions. Andrea Jelinek has noted that accountability can help bridge jurisdictional and legal differences by creating interoperability. It can facilitate transnational investigations by providing a more uniform environment, based on mutually agreed or commonly accepted privacy and implementation standards. Equally, it can also be a bridge for organisations: Paul Breitbarth says it works like an electric converter plug, which fits in each jurisdiction, even if the exact legal requirements are different.

However, as many regulators already know, this means a new type of work for regulators. They have to invest resources in accountability, be creative, and think about how to help controllers understand. Many regulators of all sizes have already identified where support is needed. For example, in Guernsey the regulator organises popular “drop-in” sessions for controllers every other Wednesday morning. In Madrid the Spanish regulator has developed the Facilita software tool to help small and medium-sized enterprises deliver an adequate level of data protection. On this ten-year anniversary of the Madrid Resolution, it is appropriate that last Monday evening the staff of the Spanish DPA won this Conference’s Accountability award for developing this tool. So regulators have a lot of accountability work to do, to provide leadership, support and guidance.

For organisations, I would underline four reasons to be accountable. First, accountability prepares for the known unknowns – Subject Access Requests, data breaches, complaints and investigations. The GPEN Data Sweep last year shows that there is a real need here, because a number of organisations had no processes in place to deal with complaints and queries from data subjects, nor were they equipped to handle data security incidents appropriately.
Second, accountability helps when the regulator calls, because there will be a documented privacy policy to show the regulator. Bojana Bellamy will tell you that regulators should take demonstrated accountability into account when carrying out investigations and enforcement. You can see this approach in the Singapore regulator’s Model AI Governance Framework, which states that, whilst adopting the voluntary Framework will not absolve organisations from compliance, it will help them to demonstrate that they have implemented the necessary accountability-based practices. Indeed, legislation could even provide a Safe Harbor one day, as can be seen in the AIF Model Accountability Law, which provides that an accountable organisation that has satisfied the requirements of a PIA or a code of conduct should not be subject to civil penalties.

Third, accountability can provide a competitive advantage. A strong privacy policy on the website means consumer trust and a strong reputation. The EDPS has argued strongly that firms should gain a competitive advantage from being fully accountable.

Fourth, accountability provides a methodology for dealing with the game-changer of Artificial Intelligence. The accountability toolbox is already available, and it provides the tools to respect privacy and to develop AI at the same time: for example, risk assessment (automated decision-making with legal or similarly significant effects on data subjects will always trigger a DPIA under the GDPR), privacy by design and privacy by default (ensuring that meaningful human review is designed in from the outset), and transparency (providing information on the values that underpin automated decisions).

Accountability when things go wrong

Finally, I should mention the disadvantages of not being accountable when things go wrong – the thorns on the stem of the rose. We should have no illusions: whatever can go wrong will go wrong. For most controllers, who want to do the right thing, accountability means preparing in advance, organising security, and putting all the necessary procedures into place. An accountable organisation, which has put robust programmes in place, is in a good position when things go wrong and the regulator calls. However, if you fail to plan, you plan to fail, and when something goes wrong, you will be sanctioned, even fined, as Marriott and British Airways are finding out this year, courtesy of the ICO. Indeed, administrative fines should and will be used to support accountability. For example, the GDPR uses the same risk-based approach for both accountability and for fines: “risks of varying likelihood and severity for the rights and freedoms of natural persons.” If you are accountable, you will have taken these risks into account; if not, you risk being fined. Moreover, the GDPR specifies that fines may be imposed for failure to implement the accountability mechanisms in the toolbox. It is a mistake to assume that accountability tools are too abstract for fines: on 27 June 2019 the Romanian regulator fined UniCredit Bank the equivalent of €130,000 for failure to implement Privacy by Design.

Fines have a particular role for organisations that resist compliance or that merely pretend to be accountable. An organisation is not accountable if it hides behind consent, and says one thing in its PR and its privacy policy but does something else in the research lab and on the website. Accountability is not about treating the risk of non-compliance as a business risk to be factored into turnover forecasts.
Such organisations can face three consequences in particular. First, fines for such organisations have been calibrated to be horizontal in scope and potentially very high. For example, in the EU the GDPR provides for powerful, competition-level fines, first imposed by the CNIL this year against Google. Regulators in Germany have recently developed a model for setting fines on the high side, so as to be particularly dissuasive. Second, in addition to fines, such organisations can be subjected to enforced accountability. For example, the FTC has recently imposed significant fines on Equifax ($575 mn) and Facebook ($5 bn). We have learned that the members of the FTC disagree whether these high fines were sufficient and whether other remedies should have been imposed to address incentives and the business model itself. However, it should be noted that in these two cases the FTC also imposed accountability mechanisms: the Equifax Board must obtain annual certification that it is complying with the FTC order, and the Facebook settlement imposes independent accountability mechanisms at all levels – a new independent Privacy Committee at Board level and Compliance Officers at operational level. Third, research on corporate psychology has shown that even high fines are not as persuasive as damage to the business. Companies can absorb even high fines as costs of doing business, but they do care about making profits, and if their reputation suffers, it can harm their sales.

Finally, if an organisation ends up in court, it is increasingly likely that it will be held to account. Courts across the world are becoming more sensitive to enforcing privacy and data protection. In Riley v California, 2014, the U.S. Supreme Court warned that “privacy has a cost.” In Puttaswamy v India, 2017, the Indian Supreme Court emphasised that “Privacy is the constitutional core of human dignity”. In this ruling the Supreme Court insisted that India should develop a “robust regime” of data protection, including, specifically, accountability, and indeed accountability can be found in Section 11 of the ensuing Indian Data Protection Bill. In the EU, the case law of the Court of Justice since Google Spain, 2014, has deliberately applied the data protection rules as broadly as possible in order to ensure “effective and complete protection of the persons concerned.” This time last year, at the 40th International Conference in Brussels, President Koen Lenaerts said that the Court of Justice is attached to ‘high levels of accountability’ of individuals that process personal data, in light of the ‘central theme’ of accountability in the GDPR. It is worth looking at the recent EU rulings on transparency, tracking and consent in Wirtschaftsakademie, Fashion ID and Planet49. These rulings may well mark a tipping point for the present economic model of private surveillance.

In conclusion, a decade on from Madrid, a lot has been achieved, but there is still much work ahead. Accountability has been established as a world-wide principle, on the move across the globe. I hope we will see more and more legislation on accountability and on increased powers for regulators. Accountability is, according to Liz Denham, “crucial, crucial” for protecting personal data in the digital age. It requires organisations to be responsible, to understand the risks their data processing creates and to mitigate those risks, and it weaves data protection into their cultural and business fabric.
Accountability empowers regulators, but they have to work at breathing life into what it means. Finally, an accountable organisation will develop naturally towards “Accountability 2.0”. This is about more than avoiding risk to customers. It is responsive, creating value for individuals and society as well as for organisations; it is transparent about what it is doing, and why; and it is ethical, because data controllers are aware that they are processing the personal information of fellow human beings. Looking forward to Accountability 2.0, I would like to conclude with a quote from Giovanni Buttarelli, at the International Conference last year: “Not everything that is legally compliant and technically feasible is morally sustainable”. Thank you.