- IAF Releases U.S. Privacy Framework Discussion Document
The time is right to discuss an updated privacy framework for the United States that maintains the ability to think and learn from data while also protecting individuals in a highly observational digital ecosystem. Respected voices from all sides of the privacy debate are seeing the unintended consequences, from controls that are not keeping up with the fast-moving technology environment to the norming on European General Data Protection Regulation concepts such as actionable accountability and legitimate data uses (Brazil being the latest to adopt them).

The IAF’s and Global Accountability Dialogue’s work on accountability, ethics, advanced analytics and AI has informed 12 principles for a U.S. privacy framework. The principles are not legislative language but rather key concepts that may be translated into statutory language. The 12 principles are divided into two parts: four for individual rights and eight for accountability. The principles are intended to encourage innovation while being interoperable with other regimes.

These principles, in many ways, have been designed to deal with the extreme poles in the privacy debate. On one hand, there is a sense that data use has become so toxic that only extreme individual control can protect us from hidden discrimination and manipulation beyond civil bounds. On the other hand, there is the view that data is the driver of the fourth industrial revolution and that anything beyond the most minimal restrictions on observation, calculation and sharing will place too many limits on innovation.

These principles are a recognition that we live in a sensor-rich society. Data must be used to innovate in medicine, transportation, safety, education, product development and other sciences. They are also a recognition that individual space is important, even in a sensor-rich environment, but that individual control does not match the complexity of today’s and tomorrow’s ecosystems. Consent is important but not enough. Instead, the principles recognize that thinking and learning with data is basic to mankind’s progress and that these learnings must be understood and applied in an ethical manner. This recognition requires organizations to be transparent about values, to use their values when driving innovation, and to make sure that people are the end and not the means, through internal review and standing ready to demonstrate to thoughtful authorities.

The principles should be seen as pieces of a whole and not elements that stand alone. However, unlike other principles, they are not meant to be linear. The principles take priority not based on their numbers but rather based on the relevant facts as data is created and used.

This framework is anything but final. The IAF will be evaluating the principles in late September at its West Coast Summit. If you have comments, please send them to me at mabrams@informationaccountability.org. The framework is below:

Fair Processing Principles to Facilitate Privacy, Prosperity and Progress

The information ecosystem in the United States is the world’s most innovative. It has not just driven economic growth, it has facilitated positive changes in all sectors. At the same time, high levels of observation along with advanced analytics have increased angst in individuals and a sense that they may be harmed by the misuse of information from them or about them. To further the discussion about a U.S. privacy regime, the Information Accountability Foundation (“IAF”) puts forth these principles for a U.S.
privacy framework. The framework is intended to: preserve America’s innovation engine, be interoperable with other new and emerging privacy regimes, protect individuals’ interests in privacy, and protect all the benefits of the 21st century information age. While interoperable with other regimes, this framework is American in its vision and structure and is divided into two parts. The first part describes the rights necessary for individuals to function with confidence in our data driven world. The second part is focused on the obligations that organizations must honor to process and use data in a legitimate and responsible manner. While the framework outlines principles, in some cases it includes means and outcomes to better illustrate a particular principle.

Individual Rights

Transparency

Individuals have the right to be free from secret processing of data that pertains to or will have an impact on them. Organizations should provide understandable statements about their data collection, creation, use and disclosure practices and about their policies and governance. Those statements should be directed at enforcement agencies, but they should also be publicly available. Organizations should also provide summaries and other means that make their data collection, creation, use and disclosure practices understandable to individuals.

Access and Redress

As validation that there is no secret collection, creation, use or disclosure taking place, and as confirmation of adequate data accuracy, individuals have the right to obtain the data they provided, to understand what observational data is created by the organization that pertains to them, and to be told what types of data are inferred by analytical algorithms. Because intellectual property rights may prevent individuals from having the right to request disclosure of inferences made by the organization, and where inferences such as scores potentially have negative consequences for individuals, organizations should provide relevant explanations about their processing, appropriate opportunities for feedback, and the ability for individuals to dispute such processing.

Engagement and Consent

Individuals have the right to know about data uses that are highly consequential to them, and to control those uses through an appropriate level of consent. Individuals also have the right to know that data is disclosed to third parties beyond the context of the relationship, to request that such disclosure not take place, to prohibit solicitations, and to challenge that a data use is not being undertaken in an accountable manner. Individuals have the right to object if they believe that data about them is inaccurate, is being used out of context, is not being processed in an accountable manner, or is being put to uses that are not legitimate. The right to object to processing does not pertain where data processing and use are permitted by law. Where highly consequential uses, such as health, financial standing, employment, housing and education, are governed by specific laws, those laws take priority.

Beneficial Purposes

Individuals have the right to expect that organizations will process data that pertains to them in a manner that creates benefits for the individual, or if not for the individual, for a broader community of people. They also have the right to expect that data will not just serve the interests of the organization that collected the data.
There may be times when objective processing does not serve the needs of each individual, but such processing does serve the broader needs of society. When this is the case, individuals may request an explanation of how processing is beneficial to the broader group. This explanation should be part of the understandable summaries required under the Transparency Principle. Where there are negative consequences to individuals, individuals should expect an explanation of the results and the ability to dispute the findings, as provided in the Access and Redress Principle.

Accountable Data Stewardship

Assessed and Mitigated Impacts

All collection, creation, use and disclosure of data should be compliant with all applicable laws, industry codes, and internal policies and practices, and should be subject to privacy, security and fair processing by design. Employees should receive appropriate training for their specified roles, and accountable employees should be identified to oversee privacy, security and fair processing obligations. Specifically, fair processing assessments should identify individuals and groups of individuals who are impacted, both negatively and positively, by the processing, and should guard against identifiable negative consequences. Where there are negative consequences, organizations should mitigate those consequences to the degree possible. If unacceptable consequences still persist for some individuals or groups, the organization should document why the benefits to other individuals, groups and companies are not outweighed by the unacceptable consequences.

Secure

Data should be kept secure at a level that is appropriate for the data.

In Context

Data should be collected, created, used and disclosed within the context of the relationship between the individuals to whom the data pertains and the organization, based on the reasonable expectations of individuals as a group. Public safety, security and fraud prevention are considered within context.

Legitimate Uses

Data should be processed only for legitimate uses that have been disclosed or are in the context of those uses, and only the data necessary for those uses should be collected, created, used or disclosed. When the data is no longer necessary for these uses, it should not be retained in an identifiable manner. Legitimate uses include the following:
- Where individuals have provided informed consent;
- Freely thinking and learning with data by organizations that demonstrate effective accountability, consistent with the societal objective of encouraging data driven innovation, and that honor the Onward Disclosure Responsibility Principle;
- Uses that create definable benefits for individuals, groups, organizations and society that are not counterbalanced by negative consequences to others, and that are based on assessments established by external criteria;
- Designated public purposes, including public safety and the identification and prevention of fraud, and in response to an appropriate legal request;
- Other uses, where organizations stand ready to demonstrate, based on assessments established by external criteria, why they believe those uses are legitimate;
- Where permitted by law.

Accurate

Data should be accurate and appropriate for all legitimate uses, and that level of accuracy should be maintained throughout the life of the data.
Onward Responsibility

Organizations that originate data should be responsible for assuring that the obligations initially associated with the data are maintained when the data is disclosed to third parties. All further onward transfers should also maintain those obligations.

Oversight

Organizations should monitor all uses of data to ascertain that the uses are legitimate, that the data is processed fairly, that the data is accurately used within the context of the relationship with those to whom the data pertains, and that the processes that support individual rights and accountable data stewardship are effective and tested. The oversight process, whether conducted by an internal body or an external agent, should be separate from and independent of those persons associated with the processing.
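To make the Assessed and Mitigated Impacts principle more concrete, here is a minimal illustrative sketch of how an organization might record such an assessment. The structure, field names and readiness rule are assumptions made for this sketch, not language from the framework.

```python
# Hypothetical sketch of a fair processing impact assessment record, loosely
# modeled on the "Assessed and Mitigated Impacts" principle above. All names
# and the readiness rule are illustrative assumptions, not IAF language.
from dataclasses import dataclass, field


@dataclass
class Impact:
    group: str               # who is affected, e.g. "thin-file consumers"
    description: str         # what the effect is
    negative: bool           # True if the impact is adverse
    mitigated: bool = False  # has the effect been reduced where possible?


@dataclass
class FairProcessingAssessment:
    purpose: str
    impacts: list = field(default_factory=list)
    justification: str = ""  # documented reasoning if residual harm remains

    def residual_negative_impacts(self):
        """Adverse impacts that remain after mitigation."""
        return [i for i in self.impacts if i.negative and not i.mitigated]

    def ready_to_demonstrate(self):
        """Demonstrable when every residual adverse impact is accompanied
        by a documented justification of the offsetting benefits."""
        return not self.residual_negative_impacts() or bool(self.justification)


assessment = FairProcessingAssessment(purpose="credit pre-screening model")
assessment.impacts.append(Impact(
    group="thin-file consumers",
    description="higher false decline rate",
    negative=True))
assessment.justification = ("broader access to competitive credit offers; "
                            "manual review path for declined applicants")
print(assessment.ready_to_demonstrate())  # True
```

The point of the sketch is the documentation obligation: an unmitigated negative impact without a recorded justification leaves the organization unable to demonstrate its reasoning to an authority.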
- More Comprehensive US Privacy Laws are Inevitable — What Do We Want them to Be?
“Data should serve people, people should not serve data.” With those words Giovanni Buttarelli summarized the intersection of privacy and ethics. Privacy law and regulatory approaches are best when they don’t frame privacy as a conflict between users of data and those that are impacted by the processing. This model is being increasingly challenged by regulatory guidance issued as a complement to the GDPR. Privacy law is best when it forces organizations to be responsible and to demonstrate the details of how they are responsible. Core accountability concepts such as privacy by design, ethics by design, and balancing that recognizes the full range of all stakeholders’ rights and interests are the future of data stewardship governance.

The new California law, AB 375, never once uses the term accountability. And while the new California law and the GDPR make comprehensive U.S. privacy legislation more likely, it isn’t clear that either the California law or the GDPR, and specifically the regulatory guidance issued in response, creates a path that will assure the full range of interests that enable the data driven innovation so key to economic growth. Neither the GDPR, which is very EU constitutionally centric, nor California Assembly Bill 375 is the answer for balanced protection and data driven innovation.

The California law is an update to the American legacy notice and choice approach. There are new rights of consumer access to their data, and a delineation of data sales versus data driven fulfillment. There are special protections for children. The broad application of what data is covered and which consumers are protected adds to its sweeping impact. And the law will be costly, requiring system changes to facilitate meeting the law’s consumer requirements. But the law isn’t next generation, and it will likely dampen data driven innovation as organizations seek to minimize their risk exposure. While this may ultimately drive U.S. federal legislation, the larger risk is a fragmented set of “me too” laws in other states.

Since the early 1970s, the U.S. legacy system has been on a path to split the two parts of privacy, autonomy and fair processing. Autonomy has been served by notice and the ability to object, while fair processing has been the product of sector specific laws. There are new requirements in the California law relative to autonomy and consumer control, but not a novel approach. We will now have an access requirement in the U.S. that goes beyond healthcare and credit reporting. Furthermore, the law provides consumers with the right to understand how data markets work and to prohibit the sale of the data that pertains to them. However, there is no requirement that transparency be effective in helping individuals understand how data is being used. Digital markets are hard to understand, and advanced analytics systems even more so. The IAF has conducted numerous workshops on how digital markets work to link and target, and education about these complex environments requires time and the best teachers available.

We are not suggesting the new law isn’t rigorous and expensive; we are suggesting it isn’t groundbreaking. It is not progressive, and it is not suited to enabling the data driven innovation on which economies are increasingly dependent. The resulting compliance systems will be costly, and the organizational compliance staffing required will be significantly greater than today, but will data use be fairer for all stakeholders?
In fact, there is a very real possibility that beneficial uses, even those that do not directly affect a specific individual, could be caught up in this process, making companies think twice about the cost of this sort of data use. The California law doesn’t require fair processing. Instead, it requires that data use be transparent and that individuals be able to exercise rights to limit some data uses. This is necessary but not sufficient. Yet responsible and accountable data processing is increasingly the backbone of modern fair data protection. The GDPR has specific individual protections, but it also has legitimate interests, privacy by design, a balancing of interests, and the potential for codes of conduct for assuring that stakeholders get protections when data is used in an innovative manner that creates real value. Despite its flaws, the GDPR creates a better infrastructure for accountability.

Canada may be on the brink of revising its private sector law at the federal level. While consent and accountability are both built into the fabric of Canadian privacy law, it too needs to reflect today’s more intensive data driven scenarios, facilitating the benefits of data use while effectively ensuring individuals are at the center. However, Canadian law and tradition create room for data uses that are responsible and in context. The concept of “beneficial applications” has been suggested in Canada as a means for creating real social value from data while achieving an appropriate level of protection. In Hong Kong, in work commissioned by the Privacy Commissioner, the IAF is developing ethical data stewardship models and ethical assessment guidance for big data and advanced analytics. Singapore will revise its digital legal infrastructure in a fashion that is compatible with modern accountability. Accountability is also a feature of the newer privacy laws in Latin America and will be part of the next generation of laws there as well.

The California legislation, with its focus on the sale (broadly defined) of data, will have an impact not only in the other 49 states but in other locations as well. Limiting the sale of data is an easier and more attractive concept for policy makers to understand than accountable and enforceable data stewardship. The resulting risk is that limits on data sales will capture the soul of future legislation in places like Canada. But if our goal is to assure that data serves people rather than people serving data, we have work to do. The concept of responsible data use, and in particular “beneficial activities,” needs to be explored, and processes need to be developed to demonstrate that activities are truly beneficial. Ethical assessment frameworks need to be developed and socialized. Trustworthy oversight mechanisms that fill a trust gap beyond what regulators might have the resources to provide need to be developed, tested and discussed. This research needs to be done, and done quickly.

Other states will likely follow the California lead, but with what model? Consumer power, as captured by AB 375, is important, and transparency is a necessary prerequisite; however, these alone are not enough. When legislation is drafted, those of us who believe that data should enhance innovation need to be ready with solid research and examples. We can’t and should not back away from data intensive activities that involve advanced analytics and artificial intelligence. Being people centered means creating value for people.
We need to create economic and social value while preserving a space free from digital predestination. We need to be ready with accountable ethical assessment tools and oversight models that are trusted. We need means to use data from across platforms for beneficial purposes never anticipated. These are the themes the IAF is exploring and will continue to explore over the next 18 months. We need others to join us in this quest for something better.
- It Is Time to Rethink the Governance of Observational Data
As long as there has been data, there has been observational data. The whole basis of science is running experiments and recording the results, or “observed data.” Just as sensors have accelerated mankind’s ability to understand nature, the emergence of the Internet has made recordable observation of individuals, both identified and not, more feasible. Smartphones, as the chroniclers of our lives, have accelerated a trend toward the recording of observational data that is furthered by the Internet of Things.

Marketing’s purpose is to drive demand. It is not surprising that the acceleration of observed data has been adapted to fuel demand by identifying those most likely to buy. It also is not surprising that observational data is used to measure the effectiveness of the advertising used to inform individuals making buying decisions. So, over a generation, advertising, supported by observational data, has become the primary means of supporting digital media. Policymakers made explicit choices that made that possible. I sat in the conference rooms where those policy choices were made. As a result, businesses were built, explicitly and implicitly, on those policy decisions.

During this century, there have been policy attempts to limit online observation. The purposes of most of those policy initiatives have been to make online tracking more visible and to give people options to limit that tracking. However, as visible as the notifications were, they never quite captured the number of parties that watched activity to trigger demand and measure the effectiveness of advertising. In the end, they just measured advertising and marketing.

Manipulation of elections, however, seems to be a greater angst trigger than online tracking for marketing purposes, and the micro-targeting of segments of the population with misinformation seems to be a bigger trigger than ever. The strange truth is that micro-targeting has been used by political parties for decades. It is the ability to so effectively deliver misinformation through the use of observational data that alarms people. The granular segmentation of media seems to be as much at fault as the election campaign advertising delivered to the target audience itself. But what this confluence of events has done is put the observation of people at the center of the policy target. UK Information Commissioner Elizabeth Denham has posted on the subject. The major newspapers on both sides of the Atlantic have been focused on the observation. The latest was a Sunday New York Times story on how the arcane world of patent applications can reveal how data from interactive speakers might be used in the future. Indicative of the interest is the running of the same story on NBC’s TODAY Show.

These journalistic events take us back to the fact that observation has always been essential for research. As an anthropologist, I was taught to watch and record. A colleague recently used this type of old-fashioned observation, using human eyes, paper and a pen, to design improvements in the patient experience. Furthermore, observation is essential for how things will work going forward. Smart cars need to watch to be smart and need to record to get better. Medical devices will be linked together and with smart medicines and personal sensors. Education will be improved through a better sense of how to personalize teaching, fed by observation. Lastly, audiences like advertising-supported entertainment and news.
So, blunt bright-line rules will not work, even though some proposed legislation would impose just that. Therefore, the IAF will kick off a project on principles for observation as part of focused data stewardship at a summit this summer. Initial results will be published in time for the International Conference of Data Protection and Privacy Commissioners. It is time for a discussion on how observation can occur while still being fair.
- Convergence Versus Divergence in Global Data Flows
Talk of trade wars makes the importance of data driven innovation even more pressing. Trade agreements affect not just tangible assets, such as steel and aluminum, but also intangible assets, such as data. Yet the basic conceptualization of the role of privacy and of innovating with data in society divides much of the world.

Those interested in understanding the divide might want to look at a paper included in the Future of Privacy Forum’s “Privacy Papers for Policymakers” program entitled “Transatlantic Data Privacy Law,” written by law professors Paul M. Schwartz and Karl-Nikolaus Peifer. The paper discusses the critical differences and similarities between privacy in the United States and privacy and data protection in the European Union. The most pressing difference is the preeminence of privacy as a constitutional right (among other fundamental rights) that defines European citizenship versus the preeminence of free expression in the United States. The EU General Data Protection Regulation (GDPR) guidance, particularly the guidance on profiling and automated decision making, truly illustrates those differences.

Schwartz and Peifer explain the role of constitutionalizing fundamental rights in the Lisbon Treaty in establishing European versus national citizenship. The GDPR cements data protection’s key role. Keep in mind that the GDPR is being implemented as other forces, such as elections in Germany and Italy, further European skepticism. In this context, the focus on autonomy as an aspect of dignity makes a great deal of sense, but it does so at the expense of the role data plays in enhancing a modern economy and society. As data protection and privacy are being further institutionalized as a key component of European citizenship, the United States is going through a period of deregulation where remedies are increasingly based on demonstrable harms. Furthermore, the U.S. appetite to think with data without restraint accelerates.

The paper’s final section discusses convergence, divergence and new institutions. According to Schwartz and Peifer, the forces for convergence are “shared technological environment, increased political agreement around the benefits of personal data flow, and common security and law enforcement concerns.” As for divergence, the authors write: “Of greater significance, in our view, are the different conceptions of legal identity in the two systems. In the EU, rights talk seeks to create a new political identity, that of the European citizen. . . . No similar constitutional interest exists in the United States, and no incentive is present to encourage expansion of the limited privacy rights that do exist.”

My sense is that over time the shared technological environment (and the data associated with it) will be a driver of convergence not only between Europe and the United States but also globally. For that to occur, there needs to be a bridging mechanism. I believe that bridging mechanism is the concept of “socially beneficial activities” as an exception to consent, captured in the Office of the Privacy Commissioner of Canada’s consent report. The IAF has committed its resources to making “socially beneficial activities” a trusted basis for processing, based on assessments that demonstrate that such processing is legal, fair and just.
- Key Infrastructure Issues Central to Next Generation Privacy Legislation
A comprehensive privacy law is again being considered in the United States, with numerous draft bills out for discussion. Key elements of any privacy law are how it will be enforced, what will be enforced and by whom it will be enforced. The current approach to privacy legislation in the United States is described in various ways, such as a patchwork of sectoral and specific processing laws plus a general prohibition of unfair and deceptive practices. No matter how U.S. privacy law is described, many different actors have a role in enforcement.

Addressing the regulatory aspect of any new U.S. privacy legislation requires an understanding of the difference between the way the U.S. administers privacy as a consumer interest and how the balance of the world administers data protection. It is inevitable that the proposed bills will be compared with laws in other jurisdictions, such as the European General Data Protection Regulation. However, European experience with data protection oversight began nearly 50 years ago with laws in Hesse and Sweden. Other countries are also considering privacy legislation to facilitate data driven economies, deal with perceived risks to people and institutions, and/or seek adequacy from the European Union. Some have agencies similar to Europe’s; others have authorities that are slightly different. New privacy laws, particularly in the U.S., raise basic structural questions that are better discussed sooner rather than later. This blog’s purpose is to set up some key questions to advance this discussion, as a precursor to a small table discussion of these issues on the margins of the FTC privacy hearings when they take place. To advance the discussion, there are at least four key questions: (1) what is the correct focus for legislation relative to privacy; (2) how should the full range of tasks related to oversight and enforcement be approached; (3) should enforcement include ex ante processes, ex post processes or both; and (4) what type and amount of resources would a regulator or regulators need?

Is Privacy the Correct Focus for Legislation and, If So, What Is Privacy?

It may seem silly to question whether privacy is the correct focus for privacy legislation. Privacy is a bundle of interests that are driven by both substantive and emotional issues. Being unfairly denied credit and ubiquitous Wi-Fi monitoring in stores are both part of privacy, but these issues are addressed with fundamentally different legislative solutions today. After 30-plus years in the privacy field, I have reached the conclusion that I cannot propose solutions to the emotional side of privacy issues, as it is too broad. However, I do believe there are solutions to issues related to lost seclusion in a very observational world, maintenance of autonomy over how I am portrayed, and achievement of fair processing of data about me. So, prior to enacting comprehensive privacy legislation, the sub-issues related to seclusion, autonomy and fair processing should be isolated, since any enforcement model will depend on this foundation. Also, the right remedies should be linked to the problem for which a solution is sought. For example, an autonomy solution should not be applied to a fair processing question, nor should a fair processing solution be applied to what is considered a seclusion problem.

Should All Tasks Related to Individual Autonomy, Seclusion and Fair Processing Be Concentrated in One Agency or Distributed Among Numerous Agencies and Offices?

The U.S.
has approached privacy as a consumer protection issue with enforcement delegated to various sectoral regulators whose master charge is not the complete bundle of autonomy, seclusion and fair processing. The Federal Trade Commission is responsible for fair markets, Health and Human Services is responsible for health, and Transportation is responsible for safe roads, rails and skies. All have responsibility for privacy, and none is a privacy agency. Law enforcement agencies, like the FTC, do what they are charged with doing: policing consumer protection law that explicitly or implicitly relates to data creation or use. Other agencies, with a broader mandate over a specific industry, balance data issues with the other regulatory challenges they face. Most agencies are not typically charged with protecting individuals in their roles beyond consumer (an exception is HHS, which is charged with protecting individuals in their role as patients).

Some countries have privacy agencies that are charged with protecting primarily individuals’ autonomy and that use autonomy as a means of achieving fair processing. Other countries have data protection agencies with various powers to interpret the law, provide guidance based on interpretations, accept complaints, spot check the market, investigate legal breaches, charge those parties that may have broken the law, make a determination of guilt, and punish. At least 16 civil society organizations in the U.S. have suggested the U.S. should have such an agency. Should all those powers be placed in one agency? Or is fair processing best done close to where the contextual knowledge exists? A senior public official in another country has determined that putting that much power into one agency creates conflicts between the various roles. Some have suggested the FTC’s role should be expanded beyond its law enforcement mission. If so, which of these roles should be given to the FTC? The IAF believes that all these roles are important but that a moment should be taken to define them, weigh them, and determine in which agency they should be placed or whether a new agency should be created in the U.S.

Ex Ante Oversight Versus Ex Post Enforcement

European data protection includes both ex ante oversight processes and ex post enforcement. Two examples of ex ante processes are Binding Corporate Rules approvals and processing approvals where data impact assessments show there is residual risk. In the U.S., there are pre-approval processes related to Privacy Shield and APEC Cross Border Privacy Rules, but the reviews are not conducted by regulatory agencies. All privacy enforcement agencies across the world have some authority to review industry practices. For example, the FTC conducted a study on data brokers to better understand the industry’s behavior, and data protection authorities, such as the French CNIL and the Information Commissioner’s Office in the United Kingdom, have the authority to conduct spot audits. Increasingly, there is a sense that formal spot checking is necessary for accountability processes, such as the ones put forward by the Information Accountability Foundation. That is why the IAF has suggested companies stand ready to demonstrate what they do with data and why. The IAF has suggested, for example, that codes of conduct will become increasingly important. Codes and certification raise specific ex ante oversight issues. Should codes be approved by a regulatory agency that would then enforce them?
Or might approval be delegated to a third-party accountability agent? If an accountability agent is used, how would that accountability agent be overseen?

What Type and Amount of Privacy Resources?

The UK’s ICO has a staff of at least 500 people, with at least 250 involved in data protection, serving a population of about 66 million. If one were to extrapolate to the U.S., which has the most innovative data users in the world and a population of 350 million people, U.S. governmental agencies would employ roughly 1,326 privacy staff (250 staff scaled by the population ratio of 350 to 66). The FTC has publicly stated that its privacy division has a staff of 34. There are also staff in enforcement and in the commissioners’ offices doing privacy work, so the FTC might have 50 privacy staff. There are also privacy enforcement staff at HHS, the FCC, the banking agencies and the state attorneys general. My guess is that the total staff dedicated to privacy in the U.S. does not come anywhere near 1,326.

It has been suggested that the very active plaintiffs’ bar in the U.S. is an effective supplement to enforcement agencies. The plaintiffs’ bar has been involved in Fair Credit Reporting enforcement since day one. Reviews of that form of enforcement are mixed. However, it is clear the corporate community is not typically enamored with enforcement through private litigation. What is clear is that there is a gap between what the U.S. is paying for enforcement and oversight today and what will be required by new legislation. This gap needs to be addressed before the U.S. enacts privacy legislation.

There will be a great deal of debate on what the new rules should be for achieving appropriate seclusion, autonomy and fair processing. The IAF will actively encourage and conduct research on these four key legal infrastructure issues so they are part of the public debate. Please join us in this endeavor.
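As a back-of-the-envelope check on the staffing extrapolation above, the arithmetic can be reproduced in a few lines. This is purely illustrative; the staff counts and populations are the rough figures cited in the post.

```python
# Reproduce the post's staffing extrapolation: scale the ICO's data
# protection staff by the ratio of the U.S. to UK populations.
ICO_DATA_PROTECTION_STAFF = 250  # rough figure cited above
UK_POPULATION_MILLIONS = 66      # rough figure cited above
US_POPULATION_MILLIONS = 350     # rough figure cited above

estimate = ICO_DATA_PROTECTION_STAFF * US_POPULATION_MILLIONS / UK_POPULATION_MILLIONS
print(round(estimate))  # -> 1326
```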
- Are GDPR Guidelines Becoming So Complex They May Overwhelm Businesses' Ability to Meet Them?
Authored by Lynn Goldstein and Peter Cullen

Last December the European Union’s Article 29 Data Protection Working Party (Working Party) issued draft guidance on two key aspects of the General Data Protection Regulation (GDPR), Consent and Transparency, both essential to the effective operation of the GDPR. The Working Party invited comments, and the Information Accountability Foundation (IAF) responded to both the Consent and Transparency drafts (collectively and singly, Draft Guidance). In short, the IAF believes there are some significant challenges related to the Draft Guidance that may have the unintended impact of limiting the beneficial uses of data and potentially limiting the longer-term goal of providing data protection that weighs the full rights and interests of individuals as the beneficial uses of data grow. The IAF’s comments on the Draft Guidance broke down into two main themes: (1) the Draft Guidance apparently narrows some of the plain language and flexibility contained in the GDPR text, and (2) the complexity of the Draft Guidance will be challenging for even the most sophisticated and well-resourced organisations to meet.

The GDPR is part of the European Union strategy for a Digital Single Market that is an engine for employment and economic growth. The GDPR is intended to create both legal certainty and a platform for the free flow of data in a suitably protected manner. This strategy is responsive to all the stakeholder rights and interests articulated in the treaties that have established the European Union. There are various provisions in the GDPR that are intended to create flexibility for discovering new knowledge, including new and better means for achieving stakeholder objectives.

There are examples of “narrowing” in the Draft Guidance, particularly related to scientific research. For example, the Draft Guidance quotes GDPR Recital 159, which states, “For the purposes of this Regulation, the processing of personal data for scientific research purposes should be interpreted in a broad manner.” But the Draft Guidance then goes on to say, “however the WP29 considers the notion may not be stretched beyond its common meaning and understanding that ‘scientific research’ in this context means a research project set up in accordance with relevant sector-related methodology and ethical standards.” The GDPR’s words “should be interpreted in a broad manner” link to the full range of fundamental rights and interests articulated by the various European Union treaties and to the European Union strategy for a Digital Single Market that is an engine for employment and economic growth.

The Draft Guidance on Transparency focuses on the need for organisations to be very specific in stating all the purposes, and the legal basis, for all such purposes. In research, new processing may develop over time that is not inconsistent with the original purposes. The GDPR was not intended to stifle innovation through workflow improvements that are not inconsistent with those purposes. The IAF is concerned that the Draft Guidance may create disincentives for knowledge creation related to new processing that is not incompatible with the initial processing. In short, the flexibility built into the GDPR for research and related activities should not be prematurely limited by the Draft Guidance.

The second thematic area of concern relates to the complexity of the Draft Guidance.
For example, the Draft Guidance on Transparency identifies a basic conundrum: the challenge of making transparency simple and concise on the one hand and complete on the other. The conundrum lies not in the objectives for transparency but rather in the details deemed necessary to achieve those objectives. The Draft Guidance includes a table with 14 factors necessary for compliant transparency. A table with 14 factors seems contradictory to concise and simple. To meet the Draft Guidance expectations, organisations of all sizes and complexity will need skills, resourcing and differing capabilities and capacity to achieve the preferred transparency. For example:
- Communications specialists with expertise in data protection to describe data processing activities and user rights in simple, age- and consumer-appropriate language;
- Consumer research staff to test the timing and efficacy of language and transparency delivery, including multi-language translations;
- Experienced designers and programmers to create the needed online and in-product experiences, product flow and visual design that are “just-in-time,” or to describe further data processing activities when they arise.

Experience from businesses that have complex relationships with customers exercised primarily online is that developing an approach requiring numerous notifications and separate consents takes a large cross-functional team of product developers, designers, usability testers, data protection experts and lawyers. It will be equally critical for organisations to put into place new business processes to ensure consistency across the communications channels recommended by the Draft Guidance. A limited number of organisations have these skills in place, but most do not. Putting such resources in place will require a substantial investment that needs to be balanced against other expectations, with the knowledge that only the most motivated individuals will have the time to absorb the communications. These cross-functional teams are the exception, not the rule, at many organisations. This staffing approach means the resources required to execute on the Draft Guidance will be a challenge for most organisations and certainly for smaller ones.

In addition to the narrowing of the GDPR’s intent and the complexity challenge, in the IAF’s view, the Draft Guidance should not inadvertently become secondary regulation. It should, rather, provide a commentary on the legal requirements mandated by the GDPR that go into effect in May. Guidance should provide an interpretive view on the objectives of the GDPR and how best to meet both the letter and the spirit of the GDPR. The IAF’s feedback on the Draft Guidance was to take care not to over-engineer these requirements to the point that they become quite challenging for organisations to implement and negate a key part of the European Union strategy for a Digital Single Market that is an engine for employment and economic growth.
- Guidance and Un-Legislated Law
In 2016 and 2017, the Article 29 Data Protection Working Party (WP29) adopted Action Plans which set forth its global implementation strategy related to the General Data Protection Regulation (GDPR). Pursuant to these Action Plans, the WP29 has produced seven Guidelines and has indicated it will produce at least eight more. As the data protection community digests the massive amount of guidance coming from both the WP29 and the individual data protection authorities, it is worth remembering that the impact of such guidance is not new; for more than thirty years, regulator guidance has changed markets. What has changed is the velocity, descriptiveness and volume of this guidance, and by extension its impact on markets.

One of the most consequential privacy decisions ever made was by the U.S. Federal Trade Commission in the 1980s, when it issued an unofficial staff opinion that credit prescreening was permissible because the minimal invasion of privacy was counterbalanced by the consumer benefit of competition in credit card markets. That decision, pursuant to the federal Fair Credit Reporting Act, while specific to pre-approved credit offers, unleashed the power of data driven advertising in the United States by creating an example of how data driven marketing can be effective. The broad and deep credit files of Americans were used to granularly segment markets. By 2001, one bank executive told me that his 60-terabyte prospecting file made it possible to pick the right credit product, out of the 6,000 he offered, for each consumer in the United States. The key concepts in credit marketing were soon expanded to other markets as well. With the introduction of the consumer Internet in 1993, digital marketing soon had the rich overlay of highly granular data to segment markets in a fraction of a second. However, the whole movement began with that FTC opinion issued back in the 1980s that balanced minimal privacy invasion against much more competitive credit markets.

Today, as we rush towards GDPR implementation, there is more and more guidance, and we should be cognizant of the impact of that guidance on how organisations will operationalize data protection law requirements. Moreover, regulatory guidance is not just a European issue. The FTC has been holding workshops and issuing reports since the late 1990s. Those reports have had real impact on data use. The recent Canadian Office of the Privacy Commissioner report on its consent consultation includes business guidance that will impact privacy practices well before its recommendations are or are not enacted into law. Recent guidance on data transfers from the Superintendent of Industry and Commerce in Colombia has caused a vigorous debate among all stakeholders.

The IAF, as a research organisation focused on the future, finds some guidance very helpful and other guidance backward focused. Our views on the WP29 draft guidance on profiling and automated decision-making can be found here. To date, the IAF, as an organisation, has not stepped back to ask hard questions about the role of guidance in shaping the legal, fair and just use of data at a time when technology continually changes the way data is created, collected and used. In January that focus will change. The IAF will hold a policy call with its stakeholders to begin exploring the role of guidance and the role of policy centers in responding to that guidance. That call will be followed by a brainstorming session in the spring of 2018. As always, your comments are welcome.
- The Need for An Ethical Framework
The vast amount of data made possible and accessible through today’s information technologies, and the ever-increasing capability to analyze that data, are unlocking tremendous insights that enable new solutions to health challenges, new business models, personalization and benefits to individuals and society. At the same time, new risks to individuals can be created. Against this backdrop, policy makers and regulators are wrestling with how to apply current policy models to achieve the dual objectives of data protection and beneficial uses of data. One of the market pressures emerging is a call for data to be processed not only in a legal manner but also in a fair and just manner. The word “ethics” or the phrase “ethical data processing” is in vogue. Yet, today, we lack a common framework to decide both what might be considered ethical and, as important, how an ethical approach would be implemented.

In their article on building trust, Jack Balkin and Jonathan Zittrain posit: “To protect individual privacy rights, we’ve developed the idea of ‘information fiduciaries.’” In the law, a fiduciary is a person or business with an obligation to act in a trustworthy manner in the interest of another. However, what would acting in a trustworthy fashion look like? It is an interesting approach that also illustrates the translation over time of ethical models into not just law but also commonly accepted practice (e.g., doctors and the Hippocratic oath). So, while we do not have a lingua franca, privacy and data protection enforcement agencies are increasingly asking companies to understand the ethical issues that are raised, for individuals, groups of individuals or the broader society, when complex data processing takes place. These issues go beyond explicit legal and contractual requirements to the question of whether processing is fair and just, not just legal.

Translating ethical norms and values-based uses of data into internal policies, mechanisms and obligations that can be codified into operational guidance, processes and people-centered thinking is a challenge for many organizations. Even before this translation stage, it is key to recognize that the word “ethics” or the phrase “ethical approach” does not exist in many privacy or data protection laws. However, synonyms like “fairness” exist today, will grow stronger under the EU General Data Protection Regulation, and are increasingly being looked at by global regulators.

An ethical framework is a careful articulation of an “ethos,” or set of norms and guiding beliefs, that is expressed in terms of explicitly stated values (or core principles) and made actionable in guiding principles. Today, organizations with mature privacy programs have internal policies that cover their legal requirements as well as requirements that go beyond the law. Examples include industry standards or positions that an organization has chosen to take for competitive reasons. The policies are usually the basis for operational guidance and mechanisms to put such guidance into place. While privacy law may be clear, newer requirements, such as assessment processes that address fair and just processing and the impact on individuals, are less so. Organizations are challenged to translate ethical norms for data use into values and principles that become policy and ultimately operational guidance, which includes data processing assessments.
Such guidance can serve as guiderails for business units that need to meet ethical standards for data use that go beyond privacy. While the IAF has defined a broad ethical framework as part of the Big Data Ethics Initiative, there is currently a gap in the guidance, specifically the translation of this ethical framework into values and action principles that an organization can express as internal Policy. This is a key connection point in the path to operational guidance such as a Comprehensive Data Impact Assessment (which can include privacy impact assessments and data protection impact assessments), developed as part of the IAF’s Effective Data Protection Governance Project, that incorporates ethical data use objectives. This translation step also helps organizations establish the ultimate guiderails they want to use.

As a parallel example, many organizations have standards of (business) conduct. These often start off with a describable set of Values that are then codified into a set of Principles and ultimately into Policy, which serves as the means to communicate to employees their expected behavior. In short, the Principle often serves as a key bridge between Values and Policy, thereby creating a meaningful framework that can then be operationalized in the organization. What is needed to advance this dialogue is a starting point for what an “ethical framework” might look like and how the various layers or levels might be described. In a pictorial model, such a framework can be drawn as layers: Values at the top, made concrete in Core and Guiding Principles, which are in turn codified into Policy and operational guidance.

Key to the ethical framework is a starting point for what the Principle (Core and Guiding) layer could look like. Below is an example of what this layer might consist of. It was developed using a combination of the IAF’s Big Data and Ethical Values, the AI principles/values, “How to Hold Algorithms Accountable” from the MIT Technology Review, and “Principles for Algorithmic Transparency and Accountability” from the ACM. The principles are written in “neutral” language, as it is envisioned organizations would adapt them to fit their own environments as well as potentially translate them for external communication as they see fit. They go beyond legal requirements.

Ethical Data Use Core and Guiding Principles

Beneficial – Uses of data should be proportional in providing benefits and value to individual users of the product or service. While the focus should be on the individual, benefits may also accrue at a higher level, such as to groups of individuals and even society. Where a data use has a potential impact on individuals, the benefit should be defined and assessed against the potential risks the use might create. Where a data use does not impact an individual, risks such as adequately protecting the data and reducing the identifiability of an individual should be identified. Once all risks are identified, appropriate ways to mitigate these risks should be implemented.

Fair, Respectful, and Just – The use of data should be viewed by the reasonable individual as consistent, fair and respectful. Data use should support the value of human dignity – that individuals have an innate right to be valued, respected, and to receive ethical treatment. Human dignity goes beyond individual autonomy to interests such as better health and education. Entities should assess data and data use for inadvertent or inappropriate bias or labeling that may have an impact on reputation or the potential to be viewed as discriminatory by individuals.
The accuracy and relevancy of data and algorithms used in decision making should be regularly reviewed to reduce errors and uncertainty. Algorithms should be auditable and should be monitored and evaluated for discriminatory impacts. Data should be used consistently with the ethical values of the entity. The least data-intensive processing that effectively meets the data processing objectives should be utilized.

Transparent and Autonomous Protection (Engagement and Participation) – As part of the dignity value, entities should always take steps to be transparent about their use of data. Proprietary processes may be protected, but not at the expense of transparency about substantive uses. Decisions made and used about an individual should be explainable. Dignity also means providing individuals and users appropriate and meaningful engagement and control over uses of data that impact them.

Accountability and Redress Provision – Entities are accountable for their use of data to meet legal requirements and should be accountable for using data consistently with the principles of Beneficial; Fair, Respectful, and Just; and Transparent and Autonomous Protection. They should stand ready to demonstrate the soundness of their accountability processes to those entities that oversee them. They should have accessible redress systems available. Individuals and users should always have the ability to question the use of data that impacts them and to challenge situations where use is not consistent with the core principles of the entity.

The IAF believes it is important to have a lingua franca that enables a broad dialogue around not just how fair data processing is considered but also how an ethical framework helps implement the resulting values and principles. Let us know what you think.
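As an illustration of the translation step the post describes, the sketch below shows one way an organization might encode the Core and Guiding Principles above as an internal review checklist. The structure and the specific questions are assumptions made for this sketch, not an IAF-endorsed schema.

```python
# Illustrative only: encoding the core principles above as a review checklist
# an assessment team could walk through. The questions are assumed examples.
CORE_PRINCIPLES = {
    "Beneficial": [
        "Is the benefit to individuals (or a wider group) defined?",
        "Have risks been weighed against that benefit and mitigated?",
    ],
    "Fair, Respectful, and Just": [
        "Would a reasonable individual view the use as fair and in context?",
        "Have data and algorithms been reviewed for bias and accuracy?",
    ],
    "Transparent and Autonomous Protection": [
        "Are substantive uses disclosed and decisions explainable?",
        "Do individuals have meaningful engagement and control?",
    ],
    "Accountability and Redress Provision": [
        "Can the soundness of the process be demonstrated to overseers?",
        "Is an accessible redress channel in place?",
    ],
}


def unresolved(answers):
    """Return the checklist questions not yet answered 'yes'."""
    return [q for qs in CORE_PRINCIPLES.values() for q in qs
            if not answers.get(q, False)]


# Example run: everything answered 'yes' except the redress question.
answers = {q: True for qs in CORE_PRINCIPLES.values() for q in qs}
answers["Is an accessible redress channel in place?"] = False
print(unresolved(answers))  # -> ['Is an accessible redress channel in place?']
```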
- Accountability Does Work
Some 143 million people were the victims of a recent data breach when their data was stolen from Equifax, a company that has an obligation to keep their data safe. Data security is tough: the bad guys only need to be successful once, while companies need to win every time. However, from the perspective of many consumers, Equifax has not responded in the most responsible way to this data breach. The company’s website did not have a section to help consumers with the breach. People had to find a special website, visit a second time to request free credit monitoring, and many have still not received acknowledgment that their requests were received. Accountable organizations need to be responsible and answerable. They must be transparent about their processes and stand ready to demonstrate those processes. Equifax’s response does not seem to reflect these concepts of accountability.

So does this mean that accountability does not work? I believe the opposite is true. Could the event have been prevented by a different regulatory approach, for example, a regulatory system that describes, in great detail, the data security actions a company must follow? The reality is that such rules would be dated the moment they were enacted. Instead, appropriate security is explicitly required by the Safeguards Rule and implicitly by Section 5 of the Federal Trade Commission Act. So, the obligation is there. Regulators could do spot audits; however, regulators will never have the bandwidth to examine every organization. An accountability approach creates the obligation and requires companies to implement the right tools to meet that obligation.

Data breach laws require companies to announce data breaches that are impactful, and Equifax did so, creating the mechanism for being answerable. Once the breach was announced, news reporters began their own investigations. At least three federal regulatory agencies have announced investigations, and they have been joined by 34 state attorneys general. The company’s stock has fallen by a third. Several class action lawsuits have been filed. I have great confidence that the company will be answerable for its behavior. The market and regulatory punishment will be what encourages other companies to behave in a manner that consumers would find more appropriate.

At the end of the day, I believe accountability does work. Where laws require accountable behavior and appropriate disclosures, the mechanisms to hold companies responsible and answerable do work. And I also believe that accountability allows for the best combination of data driven innovation and individual protections. As innovation powers forward and companies find new applications of data use, they will increasingly be expected to be accountable for the impact those data uses have on people. The IAF recently issued enhanced essential accountability elements for artificial intelligence that are also applicable to advanced analytics. The IAF is arguing that the price for being trusted to use data robustly is stakeholder-focused accountability. So, as data is used more and more robustly, let’s enhance accountability as part of the data governance infrastructure.
- Latin American Data Export Governance
Data flows are global, but privacy laws are local. I first uttered that statement in the last century during initial discussions on whether the United States had adequate privacy protection as defined by the 1995 European Union Data Protection Directive. At the time, I argued that privacy protections in the United States were a mosaic of federal and state laws, media attention, and private litigation that made the U.S. system effective, and effective is adequate. I also argued that the change in the wording of the Directive from equivalent to adequate was significant. Alas, the U.S. was not among the handful of countries found adequate.

That was a simpler time, before terrorism made government use of private sector data globally pervasive, before the use of observational data accelerated, before big data became part of our vocabulary, and before cars became part of the internet of things. So, the question of adequate countries has become much more complex. Comparing country laws and systems to other country laws and systems has become more problematic. If anything, it has made governance alternatives to adequacy more and more appealing. The simplicity of the Canadian accountability requirements for data export has become more and more attractive.

Latin America has now entered this complex adequacy equation. Personal data must flow from Latin American countries to the rest of the world for Latin Americans to be part of the global society of connected individuals. Latin American data protection authorities have an obligation to make sure their citizens are protected when data goes beyond borders. Latin American interests mirror those we see in Europe and Asia. As Brazil contemplates new legislation and the Ibero-American Data Protection Network Standards foretell revised legislation in other jurisdictions, it is useful to contemplate how policy makers might achieve both protection and the free flow of data in highly complex ecosystems.

The comment period just closed on a draft decree from Colombia's Superintendent of Industry and Commerce ("SIC") on data transfers. Colombian law and secondary regulations require that data be transferred only to countries with adequate privacy protections unless there is an exception. However, Colombia's concept of transfers is very different from what one would find in European law. Colombia's secondary regulations differentiate between a transfer, where data is exported to another controller, and a transmission, where data is shared with foreign processors. It is likely that most of the data that leaves Colombia is going to a processor, which means it is a transmission. Both transmissions and transfers are subject to a 2015 SIC decree on accountability. That means controllers are always responsible for the data they share with others, and most controllers identify and mitigate the risks related to data movement. I filed comments on this latest draft decree.

The draft SIC decree lists countries that have been determined to be adequate. That list includes countries that are members of the European Union, most of those determined to be adequate by the EU Commission, and the U.S. I believe the U.S. was found to be adequate not because privacy law and enforcement were found to be similar to Colombia's, but rather because the U.S. is effective in protecting against careless and harmful data processing. Determining the adequacy of another country's data protection and privacy protections is always difficult and complex.
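As a purely illustrative restatement of the Colombian distinction described above, the classification turns on the recipient's role. This is my shorthand for exposition, not statutory language, and the function name is an assumption:

    # Illustrative shorthand for the Colombian distinction described above:
    # data exported to another controller is a "transfer"; data shared with
    # a foreign processor is a "transmission". Names are mine, not statutory.
    def classify_cross_border_flow(recipient_role: str) -> str:
        # Transfers face the adequacy requirement unless an exception applies;
        # both categories remain subject to the 2015 SIC accountability decree.
        if recipient_role == "controller":
            return "transfer"
        if recipient_role == "processor":
            return "transmission"
        raise ValueError("recipient_role must be 'controller' or 'processor'")

The practical point of the distinction is that most Colombian data exports go to processors, so the accountability decree, rather than the adequacy list, does most of the protective work.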
After 20 years, it is gratifying that the effective argument has some standing. But in the end, it is the accountability decree that is of most importance. Whether it is a transfer or a transmission, a data exporter owns the risks to others associated with all data processing phases, including movements across borders. Most new and proposed general data protection laws contain accountability provisions. Linking accountability to responsible data movement is an effective means of signaling to companies that they have ongoing obligations when data is moved. The due diligence they undertake to mitigate risk when moving data is what is ultimately important. For example, U.S. financial institutions do not have adequacy requirements, but they do have data safeguarding requirements that demand high levels of assurance that the organization stands accountable when processing outside the United States.

My Colombia comments place an emphasis not on blanket requirements but rather on requiring organizations to understand the risks to people associated with a data export and on requiring contract provisions related to those risks. It is my view that tailored provisions protect against reticence risk as well as the other risks associated with processing. The bottom line for Latin American regulators, and those in Asia as well, is that protection of individuals will come more from accountability requirements than from all the hours spent assessing whether other legal systems meet an adequacy test. And for companies, adequacy is not a get-out-of-jail-free card. Companies will still have to have policies and procedures to protect individuals when data moves.

The IAF has an Americas Discussion Group. If you have an interest, please contact Marty at mabrams@informationaccountability.org .
- The Colombia Congress Matters
There seems to be a privacy conference every week in the United States or Europe. However, privacy training and policy development in Latin America is not nearly as well developed as in the United States and Europe. Latin America has one annual conference that is clearly considered the conference of conferences. It is organized by the Superintendent of Industry and Commerce of Colombia (SIC), the country's data protection authority. For the past five years, I have had the honor of being the adviser to the International Data Protection Congress in Colombia (Colombia Congress), and for the past four years the Information Accountability Foundation (IAF) has been a co-sponsor of the Colombia Congress. My congratulations and thanks to María Claudia Caviedes Mejía, SIC Deputy Superintendent for Data Protection, and to the entire SIC for another outstanding conference, and my thanks to them for working with the IAF.

What makes the Colombia Congress special? First, I believe it is the commitment by SIC to make the sessions cutting edge and to make the conference truly international by including experts from Europe and North America. The Colombia Congress always includes a panel composed of SIC's counterparts from other Latin American countries. Second, the Colombia Congress focuses on an agenda that deals not only with current Colombian issues but also sets a pathway to the future. In my opening remarks, I related data protection to the best international football (what Americans call soccer). The best teams have strikers who anticipate where the ball will be and are there when it arrives. However, the strikers are dependent on the midfielders and defenders who must control the ball until the time is right. Data protection is a mixture of managing today's issues while preparing for the inevitable shocks from new technology. I believe SIC organized a conference that included that mix.

The sessions focused on how Colombia and other Latin American countries might govern global data flows in a manner that is consistent with national laws and still works with modern mechanisms such as cloud computing. The sessions also focused on the fast pace of change in observational technologies and how they will impact compliance in future years. There were also sessions on why data impact assessments are so important and how models that demonstrate compliance with those assessments might be conducted. There was a session and numerous references to how the EU General Data Protection Regulation will impact Latin America and to the potential confusion due to the draft EU ePrivacy Regulation. Lastly, there was a session that discussed the APEC transfer model, its relevance to Latin America, and the importance of regulator cooperation to the success of APEC.

The Colombia Congress is a conference where people truly come to learn and think. This dedication leads me to continue to believe that information policy dialogue is important. For example, on Friday evening I was approached by one of the South American commissioners who has participated in the Colombia Congress for years. He said the conference opens participants' eyes every year to new ways in which they might think about data protection. He also said the Congress must continue. SIC and the IAF have already agreed to begin work on year six. For me, of all the conferences the IAF does, the Colombia Congress is the most satisfying. My global colleagues, I suggest you come south and join in the meaningful policy discussions that are taking place.
To my colleagues who traveled to Bogotá, thank you again for joining the IAF in this journey.
- Assessments are the Hub of a Forward-Looking Data Protection Program
The term "assessments" appears a great deal in IAF work. We have written about comprehensive data impact assessments, ethical assessments, digital marketing assessments, Canadian assessments, and legitimate interests assessments. All these references are part of the same theme: a family of comprehensive assessments of how data is used and how it impacts individuals is necessary to determine whether processing is legal, fair and just.

From the earliest days of privacy, there has been an implied requirement that organizations know how they are going to use data so they can describe the use to individuals. In the 1970s, the implied requirements were not hard to meet. Prior to database technologies, data was typically collected and used for specific, straightforward purposes. By the early 1990s, information aggregators, such as TRW Information Systems and Services and Acxiom, were beginning to use data for numerous purposes, and the first privacy impact assessments were developed. They were not developed in response to data protection law but rather to avoid reputational risk for the companies involved. Privacy by design, as a governance discipline, required organizations to fully understand what they were doing with data and why. The growth of accountability-based governance did much the same. FTC consent decrees requiring privacy management programs accelerated the assessment process. Canadian regulators took this to the next level with guidance on privacy management programs.

The General Data Protection Regulation, which goes into effect in exactly a year, has made the requirement to conduct assessments explicit in three ways. First, the "record keeping" requirements, the balancing of interests, and the ability to demonstrate many parts of "accountability" amount, in effect, to the first legal requirement to perform an "assessment." Second, for certain areas of processing likely to create risks to individuals, an explicit assessment requirement is stated, one that assesses a broader range of rights and implications than is contemplated in a core privacy impact assessment; fines are among the potential sanctions for organizations that should conduct assessments but do not. Finally, to determine the legitimacy of processing, a "balancing" process is required. The European Union Article 29 Data Protection Working Party issued draft guidance on Data Protection Impact Assessments and described in detail when such assessments would be required. As part of the consultation process, the IAF provided comments.

The IAF, whose mission is accountability-based governance of information processes, sees assessments not just as a legal requirement but as the hub, or lynchpin, of an information governance program. Whether a company is justifying the use of legitimate interests as the legal basis for thinking with data or assessing the risks associated with data processing, there are steps that inform the organization, document accountable processes, and facilitate oversight. This process begins with some common steps: a description of the processing that will be conducted; the data that will be used for that processing and the obligations associated with that data; an identification of the stakeholders impacted by the processing; the risks to the stakeholders if the processing is or is not conducted; and the benefits that come from the processing and who receives them.
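To make those common steps concrete, here is a minimal sketch that models them as a simple record with a crude completeness check. It is illustrative only: the field names and the check are my assumptions for exposition, not an IAF schema, and no record structure substitutes for the judgment an assessment requires.

    from __future__ import annotations
    from dataclasses import dataclass, field

    # Illustrative only: field names are assumptions, not an IAF schema.
    @dataclass
    class DataImpactAssessment:
        """Minimal record of the common first steps of a comprehensive assessment."""
        processing_description: str        # what processing will be conducted
        data_elements: list[str]           # the data that will be used
        data_obligations: list[str]        # obligations attached to that data
        stakeholders: list[str]            # parties impacted by the processing
        risks_if_conducted: list[str]      # risks to stakeholders if processing occurs
        risks_if_not_conducted: list[str]  # risks if the processing does not occur
        benefits: dict[str, str] = field(default_factory=dict)  # benefit -> recipient

        def is_documented(self) -> bool:
            """Crude completeness check: every common step has at least one entry."""
            return all([
                self.processing_description,
                self.data_elements,
                self.stakeholders,
                self.risks_if_conducted or self.risks_if_not_conducted,
                self.benefits,
            ])

A record like this is only the starting point; the substantive work is the internal review and oversight that weighs what the record contains.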
So, the IAF sees assessments as the central part of effective data protection governance. We see them as the basis for determining not just that processing is legal, fair and just, but also as a core element in assessing the ethical processing of data. We see them as the means of demonstrating compliance. We see them as a means of creating confidence in new, innovative uses of data. Assessments are the hub of a forward-looking data protection program. The threat of fines is a great motivator for creating assessment processes, but in the end assessments should serve a business need in this digital age. Companies should conduct assessments because doing so sustains and enables their data-driven business processes. Assessments are not easy. They often raise contentious issues within organizations, and they require internal oversight and governance processes to address those issues. But in the end, they will liberate organizations to both enhance shareholder value and let data serve individuals.
- AI Without Data is Artificial Ignorance
Many years ago, I attended a seminar in Prague on the state of credit scoring in numerous locations in what had been Soviet Europe. I was taken aback by the stretch to find any data that would facilitate the growth of consumer markets through credit. This followed a session on credit scoring at the International Conference of Data Protection and Privacy Commissioners in Santiago de Compostela that focused on two case studies. The first was the UK, where adequate data was available, but decisions were the product of profiling. The second was France, where having only negative data led to flawed, inaccurate scores.

When thinking through conflicting policy objectives, we like to say on one hand this, on the other hand that. When dealing with information policy conundrums, we need three hands for the balancing process. On the first hand, we want better outcomes. But better outcomes, be it in health, credit, education or economic growth, require the constant evolution of advanced analytic tools fed by data. That was the dilemma facing the World Bank in modernizing Eastern European economies. On the second hand, we are concerned that probability-based decision making has impacts on both autonomy and the humanity of the decisions we make. So, data protection authorities were concerned that credit scoring is profiling. On the third hand, the accuracy of the decisions made by analytic tools is only as good as the quality and quantity of the data available to those tools. So, authorities were concerned that credit scoring was compelling but that the data was not adequate. With these conflicting policy drivers, balancing is best handled by a three-handed mythical being, not by mere human policy makers who have only two hands.

Artificial intelligence (AI), the next step in advanced analytics, is already here. I am not talking about interesting consumer applications, like Alexa, but rather the machine learning tools already in place to curb fraud and make networks safe. These are not controversial applications, but the controversial ones will follow. For example, there recently was the failure of a major application of AI in cancer treatment and research that was caused, in part, by the lack of sufficient data. This failure led to this blog's title quote, "AI without data is artificial ignorance," coined by my colleague Stan Crosley.

Over the past few weeks, I have been absorbing the impact of the proposed EU ePrivacy Regulation on the development of advanced analytics. The proposed ePrivacy Regulation is grounded in the fundamental right to respect for private life, in particular with regard to communications. This means it revolves around the individual's ability to be the controller of the data pertaining to his or her communications from his or her terminals, be they a PC, phone, smart car, or smart medical device. The EU General Data Protection Regulation (GDPR) links to the fundamental right to data protection, which looks for guidance to the full range of fundamental rights impacted by the processing of data. For the past four years, the IAF has been looking at means of governing advanced analytics by looking to the full range of rights and interests of all stakeholders, putting the individual first. AI has caused the IAF to focus more on how ethics impacts both analysis and the application of that analysis. All of this is based on the breadth of data protection as a fundamental right. The GDPR governs both thinking and acting with data.
The IAF increasingly understands how to synthesize the three-handed mythical being in conducting assessments in a manner that aligns with the GDPR. However, the proposed ePrivacy Regulation is so broad in scope that it may very well govern all the data touched by communication coming off the many terminals we all touch in our daily routines. The proposed ePrivacy Regulation covers not just the browser on our PC but the apps on our phones and every IoT device as well. Governing research based on data coming from all those sensors, and on how they impact each other, will be very hard to do on the basis of explicit informed consent, as required by the proposed ePrivacy Regulation. The IAF has formed Discussion Groups based on the challenges raised by AI, with a new focus on the life sciences industries. AI in the health sciences will rely on the ability to make use of data coming from different types of sensors governed by different privacy protocols. I have great confidence that the IAF will be able to create assessment processes that can handle the three-handed balancing process. But that will be true only if the data is governed by a mechanism other than explicit consent. I would truly hate for AI, in the end, to be artificial ignorance.
- Comprehensive Data Impact Assessments Set the Stage for Accountability 2.0
There is no disagreement, whether in Europe, the Americas or Asia, that a fully connected world requires those who think and act with data to demonstrate that their processing of data is legal, fair and just. Demonstration requires comprehensive data impact assessments that help organizations discover the issues for all stakeholders. Discovery of issues helps determine that data is processed appropriately, particularly when uses go beyond the common understanding of the individuals to whom the data pertains.

Over the past year, the Information Accountability Foundation ("IAF") has had the pleasure of working with many Canadians in developing an assessment framework for Canadian organizations thinking and acting with data. The IAF received a grant from the Office of the Privacy Commissioner of Canada to develop and evaluate an assessment process for thinking and acting with data in Canada. The IAF further received support and participation from twenty companies in developing an assessment framework that would be an extension of Canadian privacy impact assessments. The product of that work was evaluated with other stakeholders last December. The project would not have been possible without the wise assistance the IAF received from Adam Kardash of Osler, Hoskin & Harcourt LLP. On February 28, 2017, the IAF submitted the assessment framework and its report to the OPC. Both are proudly posted on the IAF website. The IAF believes the report and assessment framework demonstrate that similar frameworks, which link with both local law and global trends, are doable. The IAF looks forward to doing further work in Canada on scalable oversight of assessments and, in other locations, on developing a growing tool chest of comprehensive assessments.
- Fairness and Unfairness Moving Farther Apart
Fairness has become a huge data protection policy driver in Europe and the Americas. Fairness is often hard to define in definitive terms, but the parameters of fairness are well known. A fair data application creates identifiable value for individuals, mitigates risks to those individuals, and confirms the data is used within the context of the relationship between the individuals and the data user. Fairness has become more important as consent has lost some of its effectiveness for governing data use. As fairness has become more important, privacy regimes globally have increasingly looked for means to establish that processing is fair. As sensors collect data, insights come from big data processing, and artificial intelligence is applied in a growing number of circumstances, the popularity of assessments to evaluate risks and determine fairness has increased as well. We see this reach for assessments to assure fairness in the European General Data Protection Regulation, the consent consultation in Canada, and even the draft legislation in Argentina. The Information Accountability Foundation's work on Effective Data Protection Governance is grounded in the concept of fairness, and our assessment processes are based on creating a means for data users to demonstrate that what they are doing is legal, fair and just.

The United States' unique approach to privacy is always a challenge to global data flows. Free expression is guaranteed by the First Amendment to the Constitution, so an organization gets the benefit of the doubt as it both observes what it is free to monitor in the public commons and uses that data to creatively think and communicate. That freedom has boundaries. One cannot use observation and insights to cause substantial injury, and one cannot deceive people for commercial gain. Deception as an enforcement norm was fully explored in the 1990s. The Clinton Administration and the Federal Trade Commission pushed companies hard to disclose what they were collecting and observing, how they would use that data, and how it might be shared. Once a company published its privacy policy, if it lied, the FTC had an enforceable action. Section 5 of the FTC Act covers unfair and deceptive acts, and the FTC was more than willing to use deception.

However, unfairness was a different story. Robert Pitofsky, FTC Chairman in the 1990s, told me personally the FTC would not use unfairness in privacy enforcement. Chairman Pitofsky believed that unfairness required the FTC to prove that a specific data use ran the risk of causing substantial injury, and he believed that test was a bridge too far. That reluctance changed when Timothy Muris became Chairman of the FTC in 2001. Chairman Muris believed protection should occur where there was substantial injury, part of the test required for unfairness under the FTC Act. He also found that substantial injury could be an obnoxious intrusion in one's life that the consumer could not avoid. He found endless phone calls at dinner hawking all sorts of goods and services to be a substantial injury that could not be counterbalanced by competition. While the injury to each individual was small, aggregated over millions of people, the combined injury was substantial. Ninety-two percent of the American public found telemarketing to be always intrusive, so the Telemarketing Sales Rule was revised under his leadership and the Do-Not-Call list was created. For the next fifteen years, the concept of unfairness at the FTC, mostly related to data security, slowly expanded on a case-by-case basis.
Unfairness has always been a tricky concept for the FTC. In the 1980s, when Congress believed the FTC was overusing unfairness without establishing substantial injury, the agency ran into difficulties and had its budget and staff cut. As unfairness began to emerge as a more important enforcement tool for privacy and security, many argued that substantial injury requires something more than a sense of moral outrage. Instead, it was argued, there needed to be some empirical means of measuring potential injury so that any abridgement of free enterprise is warranted.

On February 6, 2017, the FTC settled with Vizio in a case related to the second-by-second collection of data from smart TVs on the programs consumers were watching. The FTC asserted that Vizio's behavior was both deceptive and unfair. Acting FTC Chairman Maureen Ohlhausen, in a concurring statement, agreed Vizio was deceptive but questioned the unfairness allegation. The FTC staff argued that Vizio's collection of sensitive viewing habits was unfair. Ohlhausen argued that, from a policy perspective, the information might indeed be sensitive, but the FTC Act requires that for a behavior to be unfair it must be "a practice that causes substantial injury that is not reasonably avoidable by the consumer and is not outweighed by the benefits to competition or consumers." She went on to say she will "launch an effort to examine this important issue [what constitutes substantial injury] further." In the telemarketing matter, Muris found that millions of intrusions aggregated over millions of consumers created collective substantial injury. Where is the quantifiable injury at Vizio? As a consumer who owns a Vizio smart TV (I actually do), I do not believe the behavior was fair. However, I also do not believe Vizio's behavior meets the unfairness test in the FTC Act, because something more than a sense of moral outrage is necessary.

The European Article 29 Working Party issued an Opinion on Legitimate Interests in 2014 that essentially said it is up to a company that wants to use legitimate interest to demonstrate the processing will be fair. In Canada, under the accountability principle, companies using data robustly must demonstrate the use is fair. Draft legislation in Argentina places the burden on the company to demonstrate fairness. So, as concepts of unfairness were slowly expanding in the United States, and as governance outside the United States was increasingly relying on assessments to demonstrate fairness, the differences in consumer privacy protections were narrowing.

The concept of privacy harm also became part of the privacy mix when Muris was FTC Chairman. Asia-Pacific Economic Cooperation ("APEC") adopted a privacy framework that added a ninth principle to the OECD eight, titled prevention of harm. The concept that prevention of harm is a core data protection goal was ratified in the new European General Data Protection Regulation. Harm seems to be a broader concept than injury, not requiring the same level of empirical evidence.

However, there has always been a sense that the test for unfairness, risk of substantial injury, creates a high bar in the privacy area. Furthermore, in the United States the burden is on the FTC to prove unfairness, where in other jurisdictions the burden is on the organization to prove its activities are fair. Ohlhausen's project to better define what constitutes substantial injury in relation to personal data is prudent and timely. I support it.
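Read as a decision rule, the statutory language Ohlhausen quotes is a conjunction of three prongs. The sketch below is only an illustrative restatement of that structure; the predicate names are my own shorthand, and each prong is, in practice, a contested factual and legal determination rather than a boolean input.

    # Illustrative restatement of the FTC Act unfairness test quoted above.
    # Predicate names are shorthand assumptions, not legal terms of art.
    def is_unfair(substantial_injury: bool,
                  reasonably_avoidable: bool,
                  outweighed_by_benefits: bool) -> bool:
        """A practice is unfair only if the injury is substantial, not
        reasonably avoidable by the consumer, and not outweighed by
        benefits to competition or consumers."""
        return (substantial_injury
                and not reasonably_avoidable
                and not outweighed_by_benefits)

    # The Vizio disagreement in these terms: staff treated the first prong
    # as met; Ohlhausen questioned whether quantifiable substantial injury
    # had been shown at all.
    print(is_unfair(True, False, False))   # True: all three prongs met
    print(is_unfair(False, False, False))  # False: no substantial injury

The structure makes the dispute precise: the Vizio disagreement was never about the second or third prong, only about whether the first prong can be satisfied without quantifiable injury.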
However, at least for some period of time, this issue will increase the divide between the U.S. and these other jurisdictions. There is never a good time for the privacy divide between the United States and the balance of the world to widen. New technology is creating increasing stresses, and highly innovative smaller companies are pioneering applications that only make logical sense when expanded to a global environment. The U.S. was not deemed adequate by Europe before this case, and Ohlhausen's concurring statement will not change that. So it is up to companies to demonstrate that they manage data in a fair fashion. For companies providing global services, the direction of Effective Data Protection Governance is prudent. If one can demonstrate that data is processed in a legal, fair and just manner, the processing will almost surely not be unfair. On the other hand, if one manages to a standard based on substantial injury, not avoidable by the consumer or counterbalanced by benefits to markets and consumers, one may be safe from enforcement in the United States, but one may be out of step with global expectations. While regime gaps might widen, accountability gives companies guidance that minimizes global corporate risk.