
  • The Fair Information Policy Development Vacuum

    Over the past decade, policy development in the data protection field has been very robust, with some good and bad results and some results that are a muddle. Yet, with all this activity, there still seems to be a sense that there is a policy vacuum that cries to be filled. In the simplest terms, people need to be protected while digital innovations need to work. Policy has a way to go to meet those dual objectives. The IAF heard this explicitly when I was in Europe in early September. A key question is how policy innovators should respond to this dilemma. For analysis purposes, policy innovation may take place in any one of four domains: legislation, regulatory guidance, business innovation, and think tanks and academic centers. Some examples, broken down by domain, include:

Legislation – Policy innovation may be driven by legislators through the requirements they create: the GDPR and privacy laws in various jurisdictions, including Brazil and Japan; U.S. state legislation in California and Nevada.

Regulatory guidance – This may be in the form of regulations that are an extension of the law or guidance that informs best practices: Canadian 2012 guidance on accountability; interpretations of the GDPR; forward-looking reports and workshops by and from agencies such as the EDPS and the FTC.

Business innovation – Organizations may drive the definition of fair practice based on the actions they take to self-regulate: new corporate tools such as privacy dashboards, automated assessment tools, and tools to map data within an organization.

Think tanks and academic centers – These organizations may create innovative policy solutions that then may be adopted in one of the other three domains above: think tank papers from FPF, CIPL, the IAF and professional associations; scholarly papers from academics in law, computer science, and public policy.

There is no question that the domains impact each other. For example, the law generates base requirements that are then interpreted by regulation. Further guidance may be influenced by regulators seeing best business practices and participating in think tank projects. For example, there is a direct relationship between the essential elements of accountability developed by the Global Accountability Dialogue and the 2010 Article 29 Working Party opinion on accountability. Greater clarity, while desired, has not generally been the result of all this work in all four domains. Organizations in the vanguard of new data driven business models are still making decisions with little confidence that the decisions are based on a firm sense of what will be considered acceptable. Even worse, a great deal of data processing that would benefit people is not being done because of an inability to determine what will be acceptable. The IAF refers to this as reticence risk. Lastly, "edge riders" push through solutions that create a trust deficit because muddle may indeed reward those that take advantage of the uncertainty around what is acceptable. Many policy thinkers thought that the big fines in the GDPR would help resolve this muddle, but there is little evidence that they have. The muddle just gets worse as the technology continues to facilitate an observational world that drives advanced analytics in all forms. There is no shortage of new compliance requirements, but they are not responsive to the challenges related to purely data driven research, AI applications, the Internet in Things and virtual reality.
In part, this is because policy developments are informed by what we know (history) rather than by what we anticipate (the future), because anticipation may be seen as speculative. So trustworthy data practices are informed by the past rather than shaped in anticipation of the future. Therefore, the means for establishing that new data practices in highly digital ecosystems are reasonable are still very uncertain. This uncertainty has existed since the consumer Internet made observation fairly easy, yet business seems to feel the policy vacuum more acutely now. Why is there a sense the vacuum is increasing? Some of this unease is fed by technology's fast migration. However, there are current conditions in many of the domains mentioned above that impede solutions and lead to uncertainty, and thus the growing vacuum. Grouped geographically, these conditions include:

Europe: The European Data Protection Board is focused on operational issues, and that is expected to be the case for the next few years. The European Data Protection Supervisor often has been where new issues get analyzed and conceptual frames clearly articulated. A policy innovator, Giovanni Buttarelli, recently passed away, and the appointment of a new EDPS is necessary since the mandate of his assistant, currently serving in the role, expires in December; some uncertainty has been added since there does not seem to be a timeline for that appointment. The new EU Commission has an uncertain digital policy structure, and there is a lack of experience at many of the leading EU enforcement agencies. Brexit is absorbing ICO resources, and Brexit removes the UK ICO as a key regulatory agency within the EDPB. There are inconsistencies in guidance issued by various countries on numerous GDPR provisions, leaving companies to struggle with which guidance to follow. There are provisions in the GDPR that require political decisions at the country level (for example, research), and the decisions that have been issued constrain innovation. There is a sense that some businesses are playing shell games with their GDPR programs.

North America: The Canadian political process is signaling new legislation in the next legislative cycle. There is friction between the digital roadmap and the legacy consent-based system with stretched consent. Because an ombudsman regulator exists under the Personal Information Protection and Electronic Documents Act, a debate is occurring about whether greater enforcement powers are necessary; that tension is exacerbated by the fast global evolution of data driven business models. Canada's adequacy will soon be reviewed by the EU Commission. In the United States, there is a growing consensus that a new policy approach is needed, but cross-cutting cleavages make consensus on what that might be very problematic. Tension exists between the desire for innovation and the need to curb excesses. The internet knows no boundaries, so bridges to international interoperability need to be built. The second California ballot initiative is an example of a state solution driven by a federal failure to act. The policy ecosystem feels most comfortable with incremental change, but the U.S. has no history of comprehensive privacy law.

Asia: There is a desire for European adequacy. Singapore, and maybe others, want to forge a new Asian way. There are conflicting drivers in China.

Given the previous policy developments and the current conditions, the question is which challenges each of the four domains in policy innovation should address first to have the best chance of creating a model for protecting innovation in evolving data and analytic ecosystems, reducing reticence risk, and creating a roadmap for ethical data use where data serves people. The IAF cannot answer this question for the other policy innovators, but for itself, the IAF's current endeavors are designed to be both strategic (a new fair processing legislative model to address future issues, not past ones) and opportunistic (a change to Canadian privacy law that will help inform other jurisdictions). The answers to the questions of which projects and in which geographies will help the IAF define where it operates and which projects it pursues in order to have the right impacts pursuant to its research and education mission. Continuing to be strategic and opportunistic should enhance the likelihood that innovation in evolving data and analytic ecosystems is protected, reticence risk is reduced, and a roadmap for ethical data use where data serves people is created.

  • Hard Law Must Mesh With Contextual Based Fair Use

    The 41st International Conference of Data Protection and Privacy Commissioners (ICDPPC) will take place in late October in Tirana, Albania. The ICDPPC is a conclave of enforcement agencies seeking the means to create commonalities in a world where data flows ubiquitously. The ICDPPC will explore a key dilemma that challenges the growing community of data protection authorities and their constituencies in civil society, academia and business. In particular, one of the sessions will explore the expanding observational ecosystem, where data equals power, and whether and how the key accountability concepts necessary to drive ethical data use are truly enforceable. This discussion highlights the tension between hard law – that expressed in statutes and regulations – and soft law – that expressed through guidance. To be perfectly clear, leading an entity called the Information Accountability Foundation (IAF), I have a prejudice for soft law – for policy that guides accountability processes that drive legitimate, responsible, answerable data use. Accountable organizations must make data driven decisions that are legal, fair and just. To achieve legality, one needs enforceability. Enforceability may not be needed to drive trustworthy behavior at all organizations, but it is a necessary prerequisite for trust in ecosystems where organizations have varying motivations. So, what drives enforceability? Enforceability is driven both by mandated obligations designed to achieve processing that is fair as well as legal and by agencies that have authority to enforce those obligations. The IAF has put forward fair processing legislation in the United States that defines new obligations and balances the interests of stakeholders but also creates the authority to enforce those obligations. Similar legislative debates are being held in other jurisdictions. Still other countries are exploring guidelines that define how organizations should make reasoned and responsible decisions even as they stretch the mandate of legacy law. The European General Data Protection Regulation (GDPR) requires accountability processes such as data protection officers (responsible parties), privacy by design, data protection impact assessments, and balancing interests when legitimate interest is used as a legal basis to process personal information. However, European hard law will be challenged by the Internet in Things (yes, in) and, for example, its ties to health research and practice. So soft law, where there are already mechanisms for it in the GDPR, will play a role in achieving legal, fair and just processing in ecosystems where the Internet in Things and Artificial Intelligence (AI) will play massive roles. This may be soft law, but the GDPR provides the authority to enforce it. It is soft law in that an obligation for process is mandated but the description of the process is not; that description must be driven by the context of the processing to be conducted. In effect, soft law can and should be enforceable. There are countries, such as Mexico and Canada, where accountability is contained in the law, but typically, accountability is described through guidance rather than firm structures in the law. The IAF has found that, on average, Canadian businesses have better institutionalized accountability structures because of regulator guidance, even though enforcement powers related to accountability are not mandated by the law.
Furthermore, Hong Kong and Singapore are using accountability to give guidance on behavior related to AI. So, one has hard law in Europe in terms of requiring the structures for accountability, and soft law in other jurisdictions that presents a clear picture of accountable processes. We have much to learn from the entire data protection and privacy regulatory community. As we enter the debate that will take place in Tirana, I, as a member of the debate, will stipulate that the obligation for both basic and advanced accountability processes, and the ability to enforce related to those processes, should be part of the hard law and that they should not be overly prescriptive. Specifically, this obligation should be part of the laws that enhance humankind by protecting us from both inappropriate processing and the absence of processing. I would also suggest that the laws need to create the space for processing that is ethical but cannot even be anticipated by today's lawmakers. The IAF, along with the OECD and CIPL, is organizing a side event in Tirana on building the ground rules for Accountability 2.0. When the IAF talks about Accountability 2.0, it is referring to added accountability elements such as stated organizational values, assessment processes that include defined stakeholders, risks and benefits for those same stakeholders, and controls that facilitate both internal audits and external validation. The session will begin the next stage of debating enforceable soft law to protect the interests of people. I am so looking forward to this discussion at the ICDPPC.

  • Look to Baseball for an Example of Accountability Oversight

    One of the ways to enhance the trustworthiness of accountability processes is for there to be oversight of the accountability process. Sometimes this oversight concept is difficult to understand or is misunderstood because organizations worry that what is meant is second-guessing of the results of the accountability process. One of the requirements of accountability is having in place systems for internal ongoing oversight and assurance reviews and external verification. A recent IAF blog asked, verification of what? External oversight of demonstrable processes is not a review of every decision but rather a validation of the processes behind the decisions. What does this mean? Minor league baseball provides an example. This season the independent Atlantic League first used a computer to call balls and strikes in its All-Star Game. The plate umpire wore an earpiece connected to an iPhone in his pocket and relayed the call upon receiving it from a TrackMan computer system that uses Doppler radar. After its successful "tryout" in the All-Star Game, the "automated ball-strike system" (ABSS) was implemented for the rest of the season. It is anticipated that after the bugs are worked out (for example, the ABSS deems a bounced ball that crosses the plate a strike), the ABSS may be "tried out" in the major leagues. Some will say this example is not relevant because a baseball player's statistics, such as batting averages, are not personal information. Baseball players have given the Major League Baseball Players Association the right to license their names and statistics. Others argue that the use of baseball players' names and statistical information should be freely available under the First Amendment to the U.S. Constitution. Whatever the legal theory, the calling of balls and strikes is part of a baseball player's statistics. Does he strike out a lot? Does he walk a lot? So how does the use of the ABSS in minor league baseball demonstrate that oversight is of the assessment process and not of the individual assessment? When the season is over, Major League Baseball will look at how accurate the ABSS was in calling balls and strikes over the season. Major League Baseball doesn't care what the results were with respect to a particular player or a particular team or a particular game. But it does care how accurate the ABSS was overall and whether the results were achieved with integrity and without bias. This is what is meant by overseeing the assessment process. The overseer should not care about the results of a particular assessment. It should care, however, about whether overall the process was followed, whether overall the results were fair and whether the process overall had integrity and wasn't biased. When the oversight function works in this manner, the trustworthiness of the accountability processes is enhanced. Let's play ball!
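To make that distinction concrete, here is a minimal Python sketch of what season-level oversight of the ABSS might look like. The record fields and the batter-side bias probe are invented for illustration; nothing here reflects Major League Baseball's actual review methodology:

```python
from collections import Counter

def review_season(calls):
    """Season-level oversight of an automated ball-strike system.

    Each call is a dict with illustrative fields:
      'abss':  the automated call ('ball' or 'strike')
      'truth': the reference ruling used for the season review
      'batter_side': 'L' or 'R', a trait that should not affect calls
    The overseer looks only at aggregates, never at one game or player.
    """
    total = len(calls)
    correct = sum(1 for c in calls if c['abss'] == c['truth'])

    # Probe for systematic bias: error rates should not differ across a
    # group that is irrelevant to whether a pitch is a ball or a strike.
    errors_by_side = Counter(c['batter_side'] for c in calls
                             if c['abss'] != c['truth'])
    calls_by_side = Counter(c['batter_side'] for c in calls)
    error_rates = {side: errors_by_side[side] / n
                   for side, n in calls_by_side.items()}
    return {'accuracy': correct / total, 'error_rate_by_side': error_rates}

# Tiny example with two hypothetical calls:
season = [{'abss': 'strike', 'truth': 'strike', 'batter_side': 'L'},
          {'abss': 'strike', 'truth': 'ball', 'batter_side': 'R'}]
print(review_season(season))  # overall accuracy and per-side error rates
```

The oversight body cares about the aggregate numbers and whether they were produced with integrity, not about reversing any individual call.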

  • Trust Deficit Acceleration Means Trust but Verify

    Dirty diesel cars, opiates, income disparities, and institutional failures. The trust deficit caused by these abuses or plain mistakes seems to be accelerating beyond red to bright red. This acceleration has huge ramifications for new privacy laws and for interpretations of existing laws. The IAF staff recently visited a privacy regulatory agency to discuss how advanced analytic processing, including AI, that may be "people beneficial" can be considered trustworthy based on a demonstration of objectives, benefits and mitigation of risks. One of the agency officials turned to me and said, "I bet Johnson & Johnson considered fighting pain to be beneficial to people." A judge's ruling that marketing by a well-respected company exacerbated an opiate addiction problem is a far cry from online behavioral targeting. The comparison might be apples to oranges, but trust, especially of motive, relative to objectives and processes seems to be under challenge. Real and perceived compliance and process failures in the marketplace not related to data are having a severe impact on trust in data driven processes as well. In compliance areas ranging from polluting diesel engines to over-prescribing addictive painkillers, it isn't just business that is seen wanting; it is all institutions, from churches to governmental executive offices. And data driven processes are not immune. While in the past one could say, "Where is the harm in processing data? It is just an ad," today processing harm is being defined in ways that get to the essence of our values as people. The Netflix documentary, "The Great Hack," makes the case that threats to liberal democracies may be the price of robust observed data usage through profiling and micro targeting used in an inappropriate fashion. In the digital age, observation in itself can be viewed as a harm. Hong Kong demonstrators are destroying smart street lights for a reason. The watching eye is not always viewed as benign. These occurrences are all part of the environment of growing general distrust. For years the IAF has received pushback on the use of the term "trust deficit." We have been told that if you say there is a trust deficit, policymakers might hear you and believe it to be true. However, we believe this is an effective way to label the struggle that business faces – the use of data by business creates people benefits against the backdrop of business and institutional failures, including inadvertent and intentional misuse. In meetings with IAF members, we have seen evidence that thoughtful companies are trusted by their customers and respected by their regulators. But regulation isn't targeted to compliant companies; it is targeted to a more general trust deficiency. Well-established brands are using data to build competitive advantage and to provide benefits to multiple stakeholders. These uses typically are a good thing. Furthermore, if they don't do so, they will be at a competitive disadvantage or benefits will not be created. However, data use is based on acceptance or permission from the law or individuals. When a company stretches uses or does not have processes to evaluate market acceptance and risk and something goes wrong, it isn't just that company that suffers a loss in trust. Guess what? Politicians know there is a trust deficit and are reacting politically accordingly.
California lawmakers have been steadfast in allowing almost no improvements to the challenging definitions contained in the California Consumer Privacy Act. In Europe, the restrictive interpretations of the GDPR by the European Data Protection Board have made data driven research difficult if not impossible. If the current trend continues, it may lead to bad public policy on a full range of information policy issues. Unfortunately, notions of flexibility are regarded with ambivalence and likened to self-regulation, rather than seen as a sensible, governable approach to data use in a digital age. Amplifying this view is the sense that companies have proven they can't be trusted to regulate themselves. Twenty-plus years ago, observation in the form of third-party tracking software placed on individuals' computers was necessary to make the consumer Internet user friendly and economically viable. Business argued to policymakers that the tracking was safe and proportional, and that regulation would kill the Internet. Over the years, data driven observation has become more robust, and related data have been shared widely. These developments have led to a few highly reported negative and sensationalist stories. The repercussions of these assertions are just beginning to be felt. The most immediate impact will be in the digital advertising ecosystem. The California Consumer Privacy Act and investigations by European regulators are likely to change the mechanics behind digital advertising. The result will be difficult for many business models. There are consequences to legislative overreaction and underreaction. The stakes are exponentially higher than just an advertising-supported cyber ecosystem. Observation is necessary to make an increasing number of environments workable. These ecosystems range from smart cars to smart medical devices. Furthermore, observed data is necessary to fuel the digital future that will drive economic growth. Blunt instruments to limit data flows and data use will have consequences for people, society and companies. Ecosystems need observation to operate and to fuel benefits society dearly wants. At the same time, the accelerating trust deficit increasingly puts any type of flexibility in jeopardy. This brings us back to Ronald Reagan's famous phrase, "trust but verify." The 2009 Essential Elements of Accountability required organizations to stand ready to demonstrate. Subsequent work advanced accountability to address more complex data environments and made clear the requirements and processes for effective internal oversight. The IAF's policy solutions create governance mechanisms that document policies and processes that make that demonstration possible. For those demonstrable processes to be measurable, there need to be standardized components that still allow for both corporate diversity and external review. The IAF solutions, ranging from model legislation to ethical assessments and oversight processes, are designed to allow for verification of the competency and integrity of demonstrable processes. Before these more robust governance processes take shape, the movement is from acceptance of standing ready to demonstrate to calls for more proactive validation. But validation of what, and to whom? External oversight of demonstrable processes is not a review of every decision to use or not use data. Straightforward uses, ones that are fully anticipated and understood and that meet established standards, do not need additional review.
What we are discussing is validation of the processes behind decisions about observation and/or advanced analytics, and validation that those decisions are made soundly by competent people and with an honest focus on defined external stakeholder interests. However, validation or verification requires people, processes and systems at the validator, not just the controller. What are those processes, and who pays for them? Is it regulators? Is it accountability agents of some other form? Is it other trusted intermediaries? How does one make it scalable to SMEs? What level of additional transparency can help demonstrate competency and integrity, making demonstration cost effective for both the reviewer and the party to be reviewed? The IAF's view is that the answer will be a mixture. In some cases, it will be regulators. In others, it will be enforcers of codes of conduct or practice. What is clear is that complete self-certification will not be satisfactory, particularly for data processing activities that have, or are perceived to have, a significant impact on individuals. The political challenge is arriving at a solution that is trusted and not cost prohibitive. For that to happen, there needs to be a private sector process to arrive at a consensus on what verification might mean. This discussion is to be continued at the IAF's West Coast Summit on September 19.
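To give a feel for what process-level verification could look like in practice, here is a minimal Python sketch of a validator that samples an organization's assessment records and checks only that the documented steps were completed. The step names and record format are assumptions for illustration, not a proposed standard:

```python
import random

# Illustrative required steps for a demonstrable assessment process.
REQUIRED_STEPS = {'objectives_stated', 'benefits_identified',
                  'risks_identified', 'mitigations_documented',
                  'reviewer_signoff'}

def validate_process(assessment_records, sample_size=25, seed=0):
    """Sample assessments and check process completeness, not outcomes.

    Each record is a dict mapping step names to True/False. The validator
    never asks whether a given data use *should* have been approved; it
    asks whether the documented process was followed.
    """
    rng = random.Random(seed)
    sample = rng.sample(assessment_records,
                        min(sample_size, len(assessment_records)))
    if not sample:
        return {'sampled': 0, 'incomplete': 0, 'process_followed_rate': None}
    incomplete = [r for r in sample
                  if not REQUIRED_STEPS.issubset(
                      {step for step, done in r.items() if done})]
    return {'sampled': len(sample),
            'incomplete': len(incomplete),
            'process_followed_rate': 1 - len(incomplete) / len(sample)}
```

A mixture of overseers, regulators in some cases and code-of-conduct enforcers in others, could run the same kind of check, which is what makes standardized components attractive.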

  • The Incubation of Trust Based Governance Systems Should Be Encouraged

    Increasingly, around the world, there is a shift in the way privacy legislative and regulatory approaches are being developed. This is in response to the recognition that much of our collective future economic growth, benefiting both individuals and society, will be driven through digital transformation. Most recently, Canada's Digital Charter has recognized that digital transformation is having an enormous impact upon both its economy and its society. As the recent OECD report Going Digital: Shaping Policies, Improving Lives notes, "Today, an ecosystem of interdependent digital technologies – consisting of the Internet of Things, 5G networks, cloud computing, big data, artificial intelligence, blockchain, and computing power – underpins digital transformation and will evolve to drive economic and societal changes". What all of these digital developments highlight is the need to grow trust systems that require and enable effective governance, where digital benefits are realizable and digital risks are managed. This combined, multi-directional approach requires new trusted accountability systems and new supporting legislative and regulatory approaches. Instead of parallel approaches that focus on independent paths of a digital economy and data protection, new models are asking how a dedication to privacy could be melded with a commitment to a robust digital future that serves people. When this approach is taken, we achieve the view that data should serve people. When data is used in a manner that serves people and abates crucial risks to all stakeholders, those data uses are people beneficial activities. It is clear that for businesses to succeed in optimizing data in a trusted way, they will need to enhance accountability mechanisms as a way of minimizing digital risks while realizing digital benefits. The more data intensive a business is, the more risk it creates, and by extension the more programmatic its approach to enabling trusted data optimization needs to be. In the IAF's March 2019 blog, A Pivot (Back) to Accountability, a shift in the role accountability plays in the realization of trusted systems was suggested. In 2009, when the global accountability project commenced, accountability was positioned as supporting the way individual rights had been thought about. These rights, at the time, were heavily centered around individual control. Today, the complexities of data flows and data use have evolved the way individual participation and organizational obligations are thought about. This led the IAF to suggest principles that update both individual participation and organizational obligations. The first part of the principles describes the rights necessary for individuals to function with confidence in a data driven world. The second part of the principles is focused on the obligations that organizations must honor to process and use data in a legitimate and responsible manner. These principles have been translated into legislative language as outlined in the IAF's June 2019 blog, Privacy Law's First Objective Is That Data Should Serve People – The U.S. Opportunity To Get Privacy Legislation Right, and organizations are putting this advanced accountability model in place as their way of creating trusted and answerable systems. The latest example of this is Sidewalk Labs [1] in its collaboration with the City of Toronto on the Quayside smart city project.
Last week, Sidewalk Labs unveiled its proposed "Master Innovation and Development Plan" (MIDP) for Sidewalk Toronto, a project that would design a smart city district in Toronto's Eastern Waterfront. The proposal will be considered by the government and other stakeholders in the coming months to determine whether and how to move forward with the project. This proposed public-private partnership between Sidewalk Labs and Waterfront Toronto seeks to promote affordability and sustainability while reducing climate impact and creating new mobility solutions, such as by prioritizing mass transit and pedestrians over vehicles. As the Future of Privacy Forum noted in their overview, "the MIDP as proposed contemplates substantial data collection and use; it also proposes a range of significant legal, technical, and policy controls to mitigate privacy risks and promote data protection. In the coming year, Toronto residents and officials will analyze the MIDP and work with Sidewalk Labs and Waterfront Toronto to identify aspects of the proposal that could be modified to promote benefits and reduce risks". There are a number of novel and interesting parts to the MIDP's approach to trusted governance. First and foremost, the project acknowledges that some of the urban data at the core of the Quayside effort will be personal and/or sensitive. By extension, it proposes several key measures intended to mitigate the privacy risks while at the same time realizing the "people benefits" of what are many new and innovative ways to use data. There is the contemplation of technical controls, such as employing hardware and software solutions that integrate privacy-protective data collection, use and sharing into the development and operation of the Quayside site. In addition, there are legal and organizational safeguards, such as establishing consistent and transparent processes for using urban data and independent oversight. The MIDP approach to data governance in many ways parallels the IAF's approach to trusted and answerable accountability. For example, a set of Responsible Data Use (RDU) Guidelines – what will and will not be done with data – sets a foundation for governance. These incorporate many Privacy by Design components and require artificial intelligence systems to address ethical and bias concerns. An extensive approach to assessing the impacts to individuals is contemplated through an RDU Assessment (a rough sketch of what such an assessment record might capture appears at the end of this post). The RDU Assessment supports the implementation of the RDU Guidelines as a mechanism for public and private entities to weigh the data benefits and privacy risks of digital products and services prior to deployment. The RDU Assessment focuses on transparency and extending protections to diverse groups and communities, in order to ascertain whether a particular technology or algorithmic use case negatively impacts individuals, groups, or communities due to biased decision-making. The MIDP recognizes the sensitivity of this type of significantly new approach to both technology and data intensive activities as part of the Sidewalk Toronto project and starts to address the added trust mechanisms that will be required. It goes beyond the IAF's oversight model by proposing the establishment of an Urban Data Trust that would entrust oversight and accountability of the RDU Guidelines and Assessments to a new non-profit entity.
This entity would manage urban data and technology usage independent of Sidewalk Labs and Waterfront Toronto and would oversee day-to-day digital governance of Sidewalk Toronto projects. Sidewalk Labs states the data trust concept is intended to build on existing privacy laws while providing additional protection and review before data-related measures are permitted to go into effect. The Urban Data Trust would also govern third-party data collection and use. The MIDP is very clear that this outline of a proposed form of data governance is by no means complete and that much more collaborative work is required to put this innovative model into place. But just as the exciting technology and data driven approaches contemplated in this project, which clearly have people benefits in mind, will require incubation, so too will the trusted governance model. It is "okay" not to have all the answers yet. The incubation of these new trust models needs to be approached with the healthy objective they were designed around: data should serve people, and organizations that serve people with data in a responsible and answerable manner should be able to thrive. That approach is the way all stakeholders benefit. Responsible and answerable data environments that hold the promise of people beneficial data use should not only be allowed to fully develop, but be encouraged.

[1] An IAF Strategist has provided some independent consulting to Sidewalk Labs.
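As flagged earlier in this post, here is a minimal Python sketch of the kind of record an RDU Assessment might produce before a proposed data use goes to the Urban Data Trust for review. Every field name and the gating rule are invented for illustration; the assessment contemplated in the MIDP is far richer and still under collaborative design:

```python
from dataclasses import dataclass

@dataclass
class RDUAssessment:
    """Illustrative Responsible Data Use Assessment record."""
    proposed_use: str
    data_categories: list   # e.g. ['pedestrian counts', 'transit flows']
    is_personal: bool       # does the urban data identify people?
    benefits: list          # stakeholder benefits, stated up front
    risks: list             # privacy risks, including group and bias impacts
    mitigations: list       # technical and organizational safeguards
    bias_review_done: bool  # algorithmic bias impact reviewed?

def ready_for_trust_review(a: RDUAssessment) -> bool:
    # A use goes to the oversight body only once benefits, risks and
    # mitigations are documented; personal data additionally requires
    # a completed bias review.
    documented = bool(a.benefits and a.risks and a.mitigations)
    return documented and (a.bias_review_done or not a.is_personal)
```

The point of the sketch is the shape of the obligation: benefits, risks and mitigations are documented before deployment, and independent review is the gate rather than an afterthought.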

  • Privacy Law’s First Objective Is That Data Should Serve People – The U.S. Opportunity To Get Privacy Legislation Right

    Laws to govern the data age are extremely hard to draft. Policymakers will encounter this when they revise competition law to deal with data rich conglomerates. They have already tried to address this in the privacy area through the European General Data Protection Regulation (GDPR), which recently had its one-year anniversary. However, there already are discussions about what revisions to the GDPR are necessary as data rich opportunities are created. The California Consumer Privacy Act (CCPA) goes into effect in 2020; numerous amendments to the CCPA have been proposed, and several already have been enacted. None of the privacy laws we have seen, whether currently in place, under active revision in many parts of the world, or part of the U.S. federal privacy legislative debate, recognizes that people have a stake both in data being used aggressively and in data being used protectively. As Giovanni Buttarelli, the European Data Protection Supervisor, said, "data should serve people." On Sunday, June 9, 2019, The New York Times editorial board published an editorial entitled "Where Is America's Privacy Law?" The editorial argues that the U.S. Congress is late because it has not enacted comprehensive privacy legislation. While the IAF agrees that the United States needs an omnibus privacy law, we also believe that it is better to get the law right. The New York Times editorial, from our perspective, saw only one side of the equation – protecting people from data misuse. We believe both sides of the equation – preventing data misuse and allowing innovative use of data that benefits people – are required for privacy legislation that will be in effect for at least a generation. Washington policymakers have stated that they have seen enough principles and that they need legislative language from which to pick and choose. The IAF has posted to its website model legislation entitled The Fair and Open Use Act. That is the short title. The full title uses the words fair, accountable, innovative, responsible and open. All those words are very important. The draft begins with nineteen findings. The first finding provides: "The information ecosystem is the world's most innovative. It has not just driven economic growth; it has facilitated positive changes in all sectors." Other findings state: "The benefits of the information age belong to everyone. Individuals justifiably expect organizations will process their data in a manner that creates benefits for the individual, or if not the individual, for a broader community of people. Data should not just serve the interests of the organization that collected the data." Lastly, the findings are clear that this model legislation will be complex. The last finding says: "We live in a complex, data-driven world with diverse business models and infinite possibilities for innovation. This reality requires complex, nuanced, innovative and agile policy and regulatory response." We believe these findings set the conditions for making certain that data serves people. The IAF model is the first draft legislation that is heavily focused around the OECD accountability principle. It provides controls for individuals, but they are secondary to the safe and fair processing of personal data to ensure data serves people. It allows organizations to innovate with data, but in order to do so they must have sound, robust, accountable and demonstrable processes.
These are not checkbox processes, but rather robust activities conducted by employees who must be able to demonstrate that they are doing their jobs with integrity and with competence. These processes must be enforceable by regulators. There will be comments that the type of accountability described in the IAF model legislation requires staff and systems so that data can be used in an innovative manner. That observation is correct. Those that have implemented the GDPR understand that data governance requires people, processes, tools and infrastructure in order for data to be properly governed. The IAF model legislation argues that data that is used for legitimate purposes and that is well governed should be used; this is where multiple stakeholders receive benefit. The IAF commissioned the model legislation as an educational endeavor. The IAF does not expect it to be enacted in its entirety or in its current form. But we do believe it will be informative to those that have the responsibility to legislate. The IAF model legislation is organized into nine articles:

Article One contains nineteen findings that establish not only the importance of innovative processing to every aspect of human life, and the competitive advantage it has created for America, but also the risks to individuals and society if data are not governed correctly. Findings are not unusual in U.S. legislation, but these findings truly set up the articles that follow. Article One also contains definitions, including definitions of provided, observed and inferred data. This is important because the data type helps set up the sections that reference risk.

Article Two restricts data to uses that are legitimate. There are eight legitimate uses that are interoperable with the six legal bases in the GDPR. Knowledge creation is a legitimate use, as are societally beneficial uses. Less anticipated uses may be legitimate, but they require assessments that must be conducted with skill and integrity.

Article Three discusses the responsibilities of an accountable organization. These align with what would be found in other comprehensive privacy laws. There is a limited right to portability associated with data an individual provides to an organization.

Article Four requires an organization to have a comprehensive program commensurate with the risk that arises from the processing conducted by the organization. The requirements include a strategic vision for data that would be implemented through a comprehensive program. Those program requirements include privacy by design for all processing and data stewardship for automated processing. So, the program completes a full cycle, and internal oversight is required as well.

Article Five creates five risk bands which are managed through a risk management program aligned with the organization's institutional risk management program.

Article Six empowers individuals to engage with the organization's accountability processes. It gives the individual the right to restrict processing on a sliding scale based on risk (a rough sketch of how such a sliding scale might work follows this list).

Article Seven, while not specifying the enforcement and oversight agency, sets up the agency's powers. It also creates a safe harbor for compliance through common risk assessment methodologies and enforceable codes of conduct.

Articles Eight and Nine, respectively, require the agency to conduct outreach and education and set an effective date.
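As noted under Article Six above, here is a minimal Python sketch of how risk bands and a sliding scale of individual restriction rights could interact. The band names and the rights attached to each are invented for illustration; the model legislation defines its own:

```python
from enum import IntEnum

class RiskBand(IntEnum):
    # Five illustrative bands; the model text defines the real ones.
    MINIMAL = 1
    LOW = 2
    MODERATE = 3
    ELEVATED = 4
    HIGH = 5

def restriction_rights(band: RiskBand) -> dict:
    """Hypothetical sliding scale: the higher the processing risk,
    the stronger the individual's right to restrict the processing."""
    return {
        'can_object': band >= RiskBand.LOW,
        'can_restrict_on_request': band >= RiskBand.MODERATE,
        'opt_in_required': band >= RiskBand.ELEVATED,
        'independent_assessment_required': band == RiskBand.HIGH,
    }

print(restriction_rights(RiskBand.MODERATE))
```

The design point is that individual controls scale with processing risk rather than applying uniformly to every use.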
The IAF team asks that you read the legislative model as a demonstration project that illuminates what accountability means in effect.  The IAF also asks that you read it with the maxim, “data should serve people,” in mind.

  • Big Data Set to Transform Privacy, Create New Iteration of Accountability

    By Murray Griffin, June 12 – Businesses must rethink their approach to privacy in the era of big data, and they risk a costly backlash if they fail to do so, Peter Cullen, executive strategist for policy innovation at the Information Accountability Foundation, told a June 10 seminar in Hong Kong. Far-reaching changes would be required, Cullen said at the seminar on big data from a privacy perspective, which was convened by the Hong Kong Office of the Privacy Commissioner for Personal Data. The task will be made even more challenging by the growing significance in the big data life cycle of connected products, manufactured by companies that until recently have paid little heed to privacy, he said. "We are absolutely living in a data-driven world that is powered by very advanced analytics," Cullen, who was formerly a general manager and chief privacy strategist at Microsoft Corp., said. Data Smorgasbord. Vast pools of data can now be aggregated from multiple sources, including smartphones and wearable devices, sensors in a wide array of products and online activities, Cullen said. This has led to a smorgasbord of data "that is available for both creating value and unfortunately for also creating risk," he said. Many manufacturers are focusing on the connectedness and personalization of their products and are consequently discovering that they are "no longer a device or a product company, they are an information company, or at the very least a significant amount of value is being created through the use of information," Cullen said. As a result, "in a strange way many companies will become data brokers in the future," he said. That has significant ramifications, partly because these businesses generally don't have mature privacy systems and are likely to make mistakes, he said. Compounding the problem, regulators and policymakers are struggling to keep up with the changes, he said. That could either result in an undesirable policy vacuum or a counter-productive regulatory response "that may take classes of information or classes of uses of information right off the table, which is a risk for business, and a risk to individuals, and certainly a risk to society," Cullen said. New Approach: 'Accountability 2.0.' The solution is for businesses to think beyond an approach based on compliance with accountability as its bedrock – what he termed "accountability 1.0" – to one that also contemplates the much broader concept of fairness. "Accountability 1.0 is necessary, but not sufficient. The new approach, or 'accountability 2.0,' will have to involve rethinking individual participation," he said. "It is super-important that individuals do have meaningful consent and meaningful control where it is appropriate," Cullen said. "We are going to have to think very innovatively about what that looks like." Changing Roles. Within organizations, roles may have to be broadened to deal with the new risks. Companies currently have specialist privacy officers and data security officers, but they generally don't look in a broad way at the risks arising from the way information is used and interpreted across the organization or among organizations, Cullen said. "Nobody is looking at what I will call the 'horizontal risk' to individuals as part of the enterprise risk management structure," he said. That means "there is a whole class of risks" that isn't "getting the attention it deserves," he said.
The more advanced companies are exploring the concept of data governance boards, which include important decisionmakers and apply an ethical framework to projects involving big data, Cullen said. Communication of Reworked Approach. If and when companies do succeed in reworking their approaches to managing the risks and opportunities of big data, there is still one more task remaining: explaining what they have done and why they have done it, he said. "I think if the inside process is going to change, it's pretty important for business to think about how it is going to tell that story, not just to regulators and policymakers, but to customers," he said. "This is really a responsibility and also an opportunity I think for businesses to actually act in a way that helps create that future as opposed to waiting for guidance from a regulatory authority or a policymaker," Cullen said. To contact the reporter on this story: Murray Griffin in Hong Kong at correspondents@bna.com To contact the editor responsible for this story: Katie W. Johnson at kjohnson@bna.com Copyright 2015, The Bureau of National Affairs, Inc.

  • Accountability is As Enforceable as Any Other Privacy Management Mechanism

    Accountability has increasingly become the nucleus of effective data protection in a world where the observation of people is critical to how machines and systems work and drives advanced analytics. Canada was the first country to explicitly capture accountability as part of its privacy law, and therefore actions in Canada have impact beyond Canada. Now accountability is a basic building block of the European General Data Protection Regulation and is part of the law in a growing number of jurisdictions. Accountability requires organizations to be both responsible and answerable. Technical privacy violations are relatively easy to enforce against, while accountability requires a different oversight approach. If privacy notices are not transparent or consents are not respected, there is tangible evidence of a violation. However, accountability requires an oversight agent, be it a third party or a regulator, to look at the full range of program elements and determine whether together they lead to responsible data use. Therefore, accountability failures typically are more difficult to identify and require more descriptive enforcement to make organizations answerable. Accountability is enforceable, but it requires new skills that are beginning to emerge at data protection agencies. Recent investigations by the Office of the Privacy Commissioner ("OPC") led the Commissioner to question whether accountability, as the basis for transborder transfers, can be effectively enforced. These cases led the OPC to revisit its 2009 guidance on transborder transfers and publish a consultation on whether those transfers should be governed by consent rather than accountability. Independent of that consultation, the Canadian Ministry of Innovation, Science and Economic Development announced that it will conduct its own consultation leading to recommendations to Parliament to update the Canadian private sector privacy law. The OPC has since suspended its consultation. Prior to the suspension of the OPC consultation, the IAF drafted comments. The issue of accountability enforceability has been raised this year in other venues as well. Therefore, the IAF has determined that its comments pursuant to the suspended OPC consultation would have relevance to the readership of this blog. Those comments are below.

Dear Sirs and Madams,

This letter is in response to the consultation on transborder data flows (the "Consultation") issued by the Office of the Privacy Commissioner of Canada (the "OPC"). The Information Accountability Foundation ("IAF") is a global, non-profit research organization whose mission is to develop and provide sound policy solutions related to the processing of data pertaining to individuals. The IAF has conducted projects in Canada and has worked on transborder personal data transfer issues in North and South America, Asia, Australia and Europe. The IAF is responding to the Consultation based on the IAF's expertise in transborder personal data transfers, not based on an expertise in Canadian law. The IAF's comments reflect the views of IAF staff and do not necessarily reflect the views of its board of trustees or its funders.

Background

Under the Personal Information Protection and Electronic Documents Act ("PIPEDA"), any collection, use and disclosure of personal information requires consent, unless an exception to the consent requirement applies.
However, when personal information is shared with a third party for processing, it is the IAF's understanding that PIPEDA treats the sharing as a "transfer" and not a "disclosure". In its 2009 Guidelines on Processing Personal Data Across Borders (the "2009 Guidelines"), the OPC stated that a transfer of personal information for processing, including a cross-border transfer, is a "use" of personal information and not a "disclosure". The OPC's view was that, as long as the personal information was being processed for the purpose for which it was originally collected, additional consent for the transfer to the processor was not required. In the Consultation, the OPC states that in the absence of an applicable exception, the OPC's view now is that transfers for processing, including cross-border transfers, require consent as they involve the disclosure of personal information from one organization to another. It is the IAF's view that the OPC's updated policy position is not well founded for three reasons: (1) given the nature of data flows, accountability chains, not consent, are the most effective form of governance; (2) the 2009 Guidelines have been influential internationally; and (3) the issue of the OPC's enforcement powers should be separated from the issue of whether accountability as a basis for data flows can be effectively enforced.

Accountability chains are more effective as a governance mechanism, given the nature of data flows, than consent

The interactive nature of the digital age is dependent on data flowing to wherever they most efficiently and effectively make things work. Data flows will change as the ecosystem changes. The number of individual participants in the ecosystem may grow as those ecosystems change. For example, the data flows necessary to keep smart cars smart and personalized medicine personal may change, but the underpinning of all effective governance is responsible and accountable data stewardship. This means that even the most informed consent will be dated by the fully fluid operations necessary for things to work. The only way to achieve sound governance of data transfers is through a concrete framework of accountability chains. Organizations are responsible for passing on, and being accountable for, the conditions and obligations associated with personal data as they pass to other participants (a minimal sketch of such a chain follows below). Organizations are responsible for assessing the risks associated with the chains and mitigating those risks. The IAF has been exploring the accountability chain concept since the IAF was founded in 2013. The 2009 Guidelines have been the guiding light for both the theory and practice associated with trustworthy data flows. The organizations necessary to make smart cars avoid other smart cars and personalized medicine be personal are not only large, but they also are ever changing. A detailed description of where data will flow will change even before the transmission of that transparency disclosure takes place. A detailed consent form based on yesterday's transparency report would be meaningless in terms of controlling the flow of information to others. Policies whose purpose is to disrupt appropriate data flows, be they data localization or consent, are pushing against the nature of the way things work. For that reason, the IAF has been in favor of accountability-based governance for the movement of data from one party to another, including to third parties for processing.
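Stepping outside the letter's text for a moment, the accountability chain concept can be pictured as a linked record in which obligations travel with the data from one organization to the next. A minimal Python sketch, with invented organization names and obligations:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChainLink:
    """One hop in an accountability chain (fields are illustrative)."""
    organization: str
    obligations: tuple                      # conditions that travel with the data
    upstream: Optional['ChainLink'] = None  # who passed the data to us

    def accountable_parties(self):
        # Walk back up the chain: every participant remains answerable
        # for the obligations attached to the data it passed on.
        link = self
        while link is not None:
            yield link.organization
            link = link.upstream

controller = ChainLink('Acme Retail', ('purpose-limited',))
processor = ChainLink('CloudCo', ('purpose-limited', 'no onward sale'),
                      upstream=controller)
print(list(processor.accountable_parties()))  # ['CloudCo', 'Acme Retail']
```

Consent, by contrast, captures a single moment; the chain keeps an answerable party attached to the data as the ecosystem changes.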
Also, the question of whether individuals truly have the power to enforce sound governance based on the consents they give needs to be addressed. There are certainly instances in which individuals have that power, but it is not clear that controlling the flow of personal data, particularly to third parties necessary for the processing to take place, is an effective governance mechanism. The OPC guidance on accountability as it relates to third parties does not differentiate between third parties in Canada and those in other locations. So, it seems that guidance that labels a sharing with a third party as a disclosure would have impact beyond transfers that are transborder. In fact, it is likely that the guidance would have to cover data sharing with all third parties, including those necessary to make the modern digital ecosystem work. As to practicality, the addition of information about the transfer of personal information outside Canada to the flood of information required by the consent guidance, much of it out-of-date, would exacerbate an already overloaded barrage of decisions forced on individuals. This addition would obscure the data necessary for the decisions that are truly important for individuals to make, thus making consent an even less effective governance mechanism for data flows than it already is. The interactive nature of the digital age demonstrates the risk of confusing consent and transparency. Consent is only really meaningful in situations where the individual has the ability to withhold or withdraw consent. For example, an individual knows through transparency that an organization transfers personal data outside Canada and decides not to do business with that organization or decides to withhold or withdraw consent to the transfer of the data. As discussed below, failing to consent to the transfer of personal data does not solve the governance issue the OPC seeks to solve. The limitations on the individual's ability to enforce consent also point out the individual's total inability to enforce accountability. That responsibility lies with the organization, thus emphasizing again the importance of accountability chains.

The 2009 Guidelines have been influential internationally

The 2009 Guidelines on transborder data flows, based on organizations remaining accountable, have been lauded internationally as the most reasonable approach to transfers. If something goes wrong, there is an answerable party, not some distant entity. European Binding Corporate Rules and APEC Cross Border Privacy Rules are legal adaptations that provide what appears to be already contained in Canadian privacy law and guidance. The IAF has looked at transborder data flows in many different countries. The IAF has worked with policymakers and businesses in many jurisdictions on how to protect individuals and on how individuals get the benefits of a digital age. The 2009 Guidelines have been the soundest of the policies reviewed by the IAF. Others will speak to the analysis conducted to inform the 2009 Guidelines, but the IAF can speak to their general effectiveness.

The issue of OPC enforcement powers should be separate from the issue of whether accountability as a basis for data flows can be effectively enforced

The Consultation has been informed by two recent enforcement cases involving data breaches in Canada. The first relates to a service Equifax Canada provided through Equifax U.S.,
where the OPC found that Equifax Canada failed to police Equifax U.S., leading to an accountability failure. The second involved Facebook and its failure to fully police app providers, leading to the Cambridge Analytica violation. The OPC has raised the questions of whether accountability has been an effective mechanism to prevent the violations and whether the OPC's authority to enforce the breach of the law has been effective. As discussed above, the OPC also has raised the question of whether another principle, informed consent, would have been more effective in preventing the incidents from taking place or in providing the OPC with greater enforcement authority. How would consent have addressed the problem the OPC is trying to solve? If individuals had consented to the transfer of personal data, those data breaches still would have happened. Individual consent will not prevent data breaches from occurring. The IAF is not in a position to speak to the level of authority that the OPC currently has to take actions against PIPEDA violators. The OPC has the same authority to enforce both consent and accountability. Whether the level of investigative and enforcement powers is sufficient is a matter that may eventually be debated by the Canadian Parliament. The Department of Innovation, Science and Economic Development in Canada, in its May 22, 2019 Proposals to modernize PIPEDA, proposes incentivizing compliance with PIPEDA by expanding the OPC's existing powers, including its fining power. However, the OPC's power to enforce is not related to the principle at issue, governance of data transfers. As discussed above, accountability has been an effective basis for data flows. The fact that two companies were found to have violated the accountability principle does not mean that the accountability principle is ineffective. Laws in every sector are violated, and enforcement is used to create incentives for compliance. Sound policy should not be changed because of enforcement challenges. The IAF respectfully suggests the OPC separate the issue of accountability as the basis for personal data movement from the question of whether OPC enforcement powers under Canadian law are adequate.

  • The Glacial Movement of Privacy and the Implications to Accountability

As we see more and more draft privacy laws introduced in Congress and in state legislatures, an increasing number of enforcement actions in Europe, and more media interest in perceived privacy abuses by big tech companies, it may seem strange to equate privacy with the consistent, continuous movement of glaciers. The pace today seems very fast, and glaciers are equated with slow and steady movement. Yet when a snapshot is taken of an iceberg calving off a glacier, the moment is dramatic and almost violent. So equating privacy to a glacier seems very relevant today. Glaciers move slowly but are always moving. And perhaps more significantly, they alter the land they pass over in permanent ways. Glaciers change with the environment just as privacy changes with technology shifts. Information privacy in a world of sensor-based observed data is far different from what it was when the ancients invented writing six thousand years ago, just as glacial movement is different with global warming than it was at the end of the last ice age. Like the movement of a glacier, a slow but immutable movement toward rethinking privacy is underway, punctuated by dramatic events such as Cambridge Analytica or the Equifax breach.

In a recent blog, Foundational Issues, the IAF posited that a key challenge to the debate going on with respect to U.S. privacy legislation is what we mean when we say privacy. Today much of the public policy debate centers on, or is framed around, concerns about the acceleration of data and their use. Linking solutions to the actual privacy interests in play tends to yield the best results. Yet a key challenge is that the interests as they exist today are still being aligned with the solutions adopted decades ago.

Over the years, attitudes have changed with respect to privacy, much of that change built around issues like data breaches, behaviorally targeted advertising, tracking, and the sale of data. These concerns have resulted in public policy changes like the GDPR and, more recently, the California Consumer Privacy Act. Most of these legislative changes have either granted new rights to consumers or strengthened existing ones. More interesting, these policy changes seem to tie to an original and increasingly challenged principle that privacy is the ability of individuals to control their personal information. This explanation of privacy, originally introduced by Alan Westin, also aligns with the notion of privacy as an issue of autonomy and, in looking back, helped drive the reliance on consent in many parts of the world and the notice and choice regime in the U.S.

Over this period, we have grown comfortable with the seeming dichotomy between the way consumers describe their attitudinal concerns around privacy and the way they behave. We have come to understand that both ends of this spectrum can be true. What is interesting to note is that while many attitudinal studies have focused on privacy as an area of concern or importance, a recent study by Harris Insights & Analytics (2018 Societal ROI) asked the question "How important is it to you, personally, that companies work to truly make a positive difference?" That privacy of data outranks other issues we typically associate with having an impact on people's lives may be surprising to some. Thus, the analogy of privacy to a glacier appears to be even more relevant.

In addition to expectations on companies, potentially more change is coming. The New York Times Privacy Project may be an interesting bellwether. It is difficult to remember a time when a major publication has so prominently and at such length explored issues around privacy. Continuing with the glacier analogy, a real potential exists that people's attitudes will change as a result of this exploration. Yes, this is a front-page exploratory series, but it is also giving new language to notions and issues bubbling around the numerous concerns being expressed. Some will correctly note that, relatively speaking, only a small percentage of consumers read the NY Times. True, but this is perhaps less relevant. Almost all politicians, their staffs, and most people involved in creating public policy do read the NY Times. By extension it serves as a powerful voice and influencer.

One of the most interesting articles in the New York Times Privacy Project was by Charlie Warzel, "Privacy Is Too Big to Understand." In it, Mr. Warzel notes that "privacy" is an impoverished word, too small a word to describe the many concerns in people's minds, such as data mining, transmission, storing, buying, selling, use and misuse of personal information. He goes on to say, "In this way, the concept of digital privacy shares similarities with weighty crises like climate change. Both are what the theorist Timothy Morton calls 'hyperobjects,' a concept so all-encompassing that it is impossible to adequately describe." Perhaps the most significant part of the article was the quote from Matt Cagle, a technology and civil liberties attorney at the ACLU, who said, "You are losing control over your life. When technology governs so many aspects of our lives — and when that technology is powered by the exploitation of our data — privacy isn't just about knowing your secrets, it's about autonomy." OK, we are back to autonomy. But Cagle went on to say, "Privacy is really about being able to define for ourselves who we are for the world and on our own terms." It is the last part that marks a change. As Warzel noted, "At its heart, privacy is about how that data is used to take away our control."

The significance of this article is the demonstrable tonal shift in how privacy is talked about. Will this tonal shift and more assertive way of talking about privacy result in a change in both attitudes and behavior? Will it influence the glacial movement of change in public policy, not just in the U.S. but globally? Time will tell. But it is clear that Accountability based on Data Stewardship will be key for organizations. This approach, particularly for those seeking to create value and innovation through data, will position these organizations not only for a shift in public policy but, as importantly, for any shift in their customers' expectations, attitudes and, maybe, behavior. Perhaps it also offers a measure of inoculation against the glacial shift that is underway.

  • Stewards Not Fiduciaries

The FTC held its hearing on its Approach to Consumer Privacy on April 9 and 10 in Washington, DC. At that hearing, the questions discussed included: what are the existing and emerging legal frameworks for privacy protection, and if the U.S. were to enact federal privacy legislation, what should such legislation look like? In response to these questions and others, "information fiduciaries" were suggested as a way to protect consumers from harms resulting from data collection, sharing and use. The IAF believes that "information fiduciaries" is conceptually too limiting and that the discussion should instead be about "data stewards."

Jack Balkin's 2016 law review article is frequently referenced as the authority on the concept of "information fiduciaries." Because doctors, lawyers and accountants have special relationships of trust and confidence with their clients, often because of their collection, analysis, use and disclosure of information, Balkin describes these individuals as "information fiduciaries." According to Black's Law Dictionary, a fiduciary is invested with rights and powers to be exercised for the benefit of another person. Recognizing that applying the concept of "information fiduciaries" to businesses that use personal information might be too broad, because businesses then could not make any money at all from personal data that might be used to the individual's disadvantage, Balkin recommends that businesses should be thought of as "special-purpose information fiduciaries." When thought of this way, the nature of their services should guide the kinds of duties it is reasonable to impose on them; the duties information fiduciaries should have, according to Balkin, should be connected to the kinds of services they provide.

Listening to the Opening Remarks of FTC Chairman Simons last week, it became apparent that the concepts of "information fiduciaries" and "special-purpose information fiduciaries" should not apply to businesses that use personal data to benefit both themselves and individuals (and maybe society as well). In his speech, Chairman Simons asked: "What approach will protect consumers' privacy interests while fostering the innovation and competition that has brought us so many benefits? What, exactly, are the harms that we are trying to address? And what are the countervailing considerations, like the effect on innovation and competition?"

As Chairman Simons said, we should be finding ways for businesses both to protect consumers' privacy interests and to foster innovation and competition. Redefining the legal concept of fiduciary, from one whose duty is to protect the interests of the individual to one that can protect both the interests of the business and the individual, is not the answer. Rather than torturing the concept of "fiduciary" to fit a situation it definitionally does not cover, other appropriate governance concepts should be considered. All those trying to determine what kind of privacy legislation should be enacted in the U.S. should look to data stewardship, a concept essential to the governance concept of Accountability.

Accountability was first articulated in the 1980 OECD Privacy Guidelines as principle 8, which states: "A data controller should be accountable for complying with measures which give effect to the principles stated above." In 2009, a group known as the Global Accountability Dialogue met in Dublin to explore how the OECD accountability principle might be used to create confidence in data transfers from one country to another. This group, comprised of diverse stakeholders, reached a consensus on essential elements to guide organizational accountability in the Galway Paper. In the Galway Paper, the concept of data stewardship was discussed: "Accountability is the obligation to act as a responsible steward of the personal data of others, to take responsibility for the protection and appropriate use of that information beyond mere legal requirements, and to be accountable for any misuse of that information." Thus, under Accountability as it was conceived in 2009, data users must take responsibility for the data they use. This description of data stewardship is data-use focused. Data stewards are custodians of data, and any obligations that run with the data are recognized.

As data use has expanded, the concept of Accountability has evolved, too, into Enhanced Accountability, which takes data stewardship to a new level. With advanced analytics, systems can make decisions that impact people. For example, credit scores assess the risk related to loans, insurance, mobile phones and many other services, and often these credit decisions are made without any human involvement. The systems make decisions based on objectives set by humans, but direct human accountability is absent. To regain that accountability, it will be necessary to depend on people to build accountable governance into the objectives for the systems. Therefore, it is necessary that stewards make decisions that consider the interests of external stakeholders, the parties impacted by the processing. By doing so, the role of the data steward moves beyond custodian of the data, and the obligations that come with the data, to one that ascertains that the outcomes are legal, fair and just to the various stakeholders. Future data stewards must be stakeholder-interest focused. This concept of understanding stakeholders and their interests, both the business itself and individuals and even society if appropriate, is foundational to Enhanced Accountability.

Requiring businesses to be data stewards if they want to use personal data for their own benefit and for the benefit of the individuals from whom the personal data are provided, observed, derived and inferred is a more easily understood and applied concept. It also has the benefit of broader application; it is not limited to businesses who hold themselves out to the public as privacy-respecting organizations in order to gain the trust of those who use them, who give individuals reason to believe that they will not disclose or misuse their personal information, and whose users reasonably believe that they will not disclose or misuse their personal information, as Balkin limits the concept of "special-purpose information fiduciaries." In order to use data from sensors, employ artificial intelligence and machine learning, create inferred data, or make probability-based decisions about people, businesses need to be data stewards, not "information fiduciaries" or "special-purpose information fiduciaries."

  • Foundational Issues for New Privacy Law in the United States

A federal privacy law in the U.S. seems increasingly likely. When? It is not yet clear. However, we can say with much certainty that in the coming months we will see many draft laws that will join the ones we have already seen from Senators, members of Congress, Intel, CDT and others. The current series in The New York Times, The Privacy Project, helps illustrate why legislation is needed. However, it also illustrates how complex that legislation will be. This is one of the reasons why the IAF will issue its own model legislation late this spring. Another reason is that while fair processing principles are a very useful way to describe a framework, for legislators to translate a framework into legislation, the principles need to be couched in the language of legislation.

A key question is what legislation should seek to provide. Legislation should strive to make us safer and create guideposts for digital innovation. So where might one start when reviewing privacy legislation? At the IAF, we believe a good place to start is with key foundational issues. The day after the Federal Trade Commission privacy hearings on April 9 and 10, the IAF held a small-table discussion on the topic of foundational issues. In many ways, the IAF discussion was informed by the FTC hearings. Questions such as the current effectiveness of consent, the utility associated with accountability, and the role the FTC might play in any new legislative framework were raised at the hearings. The IAF small-table discussion included a diverse group of stakeholders. It was held under the Chatham House Rule, and we are still reviewing what we learned. These issues will be further explored at the IAPP Global Privacy Summit May 3 in a session entitled "Not Grand's Privacy Law: Fundamental Questions for Comprehensive U.S. Regime." However, we believe it useful to share the foundational issues we raised with the broader privacy community.

When we say privacy, what do we mean? Rather than defining information privacy, we tend to frame it based on concerns. From a functional perspective, those concerns may be grouped based on intrusion into seclusion, autonomy or control over the information that defines us, or the fair processing of data that pertains to us. Linking solutions to the privacy interest in play tends to yield the best results. For example, consent (autonomy) will not solve discrimination problems linked to insufficient data to train AI systems (fair processing). How should we think about manipulation across this spectrum?

Which interests are we attempting to resolve with new legislation? Are we creating comprehensive privacy protection or amending consumer protections? Section 5 of the FTC Act and most U.S. sector-specific laws have confronted privacy as a consumer protection. Restrictions on government data use have been based on constitutional rights. Are we attempting to amend or add to those protections, or are we looking towards something more focused on a broad range of individual interests related to a digital economy and society? This question begins to define the structure of a new law and, more importantly, how it might be overseen and enforced.

Do we intend the law to be administered through enforcement only, or do we expect a federal entity to oversee implementation of complex legal provisions? Comprehensive privacy laws tend to require public authorities to provide both some level of oversight and enforcement when the law is broken. Provisions such as unfair and deceptive practices have tended to be driven by enforcement of abuses rather than ongoing oversight. If one has a comprehensive law that requires privacy by design, risk assessments that drive corporate decision making, and transparency around how one might fairly conduct AI and other advanced analytics, one is pushing towards a model that requires oversight. That raises the question of what type of oversight. If one houses this function at an existing law enforcement agency, does this change the nature of that agency?

How much will it cost to oversee comprehensive legislation, and are we prepared to pay that price? The UK Information Commissioner has a staff of 500, with at least half dedicated to privacy. Ireland has a staff of over 100 dedicated to privacy. The FTC's privacy staff is much smaller. What is a proportionate staff level for a U.S. agency? What should its role and staffing mix look like? If that staff were added to an existing agency, would it change the nature of that agency?

How do we reconcile U.S. free expression values, which have included the freedom to observe in the public forum and the ability to do research with data with limited restrictions, with the concerns that drive legislation, a sense that we are over-observed and that algorithmic decisions have been unfair? Some attribute the U.S. competitive advantage to the ability of American companies to think freely about what data might predict and how those insights might be used to drive commerce. The U.S. likely wants to maintain this competitive advantage. So how do we reconcile the interests of knowledge-driven value creation and a desire for more trustworthiness? How do we not stifle new data-driven knowledge creation? Most international data protection and privacy laws require permission before data is processed in any form or fashion. Repurposing data for research, whether commercial or academic, must be permitted by the law. Today, in the U.S., for the most part, using data to create new insights is not regulated by the law. Organizations are free to think and learn with data. It is only when the data is used that explicit sector-specific laws, or general protections against unfair and deceptive practices, kick in. Some of the applications coming from analytics have increasingly been seen as harmful. How do we protect data-driven knowledge creation while being sensitive to provisions that, whether intentional or not, make thinking with data much more difficult? IAF's recent blog addressed many of these issues.

How do we prevent advanced analytics and AI from hiding prohibited discrimination? As a society, we have decided that certain factors may not be used in decision making. For example, one's gender, age and race may not be used in making a credit decision, even if those factors are predictive of performance. AI and big data both may obscure the factors that lead to decision making. How do we structure law that protects against inappropriate discrimination without creating restrictions that are overly prescriptive?

Should new legislation primarily drive legal liability avoidance or proactive accountable behavior? Privacy law may be structured as a list of prohibitions, a description of processes and objectives, or a combination of the two. While not a law, the Canadian regulators' guidance on using a comprehensive privacy management program to drive accountability is an example of regulatory guidance that describes objectives and processes to achieve those objectives. Since data use is dynamic, lists of prohibited activities lead to legal structures that are often dated when they go into effect. Some have suggested an approach that gets to legal certainty by creating white lists and black lists of activities. Others try to square the circle by governing fair processing through data subject sovereignty. And still others are in favor of specifying goals and demonstrable processes. What approach makes sense in a dynamic environment?

Is the subject in question personal data or impactful data? The definition of personal data is often used to define the domain for privacy law. As data not captured by the definition become consequential in decisions made, regulators often look for ways to capture that non-personal data as personal data. Some have suggested that the jurisdictional boundaries should be based on data impactful on a distinct individual. Such a concept is more dynamic but less certain. What data should a new privacy law cover?

Which organizations should be subject to rigorous requirements? Rigorous accountability requirements are expensive, and maybe not all companies should have to create such rigorous programs. But what are the criteria to determine who is in or out? Small companies with ten employees may create real consequences for people, while some large organizations make relatively simple use of data. How does one design the criteria to determine which types of companies the more rigorous accountability provisions should apply to?

Are there questions we missed? Please let us know. This discussion will continue over the coming months. Our intent is to move beyond issues to legislative building blocks. This will be the focus of the dialogue at our June 26 Summit. Stay tuned for details on this event.

  • Data Driven Knowledge Creation Needs to be Protected

Our collective desire to have a space where we are free from observation is increasingly under pressure from modern technology, and our confidence that data that pertain to us will be used fairly is in deficit. At the same time, data are being used to create new knowledge by gaining insights that would not be possible without advanced analytics. There is no question that current privacy law, even relatively newer versions, must be thought about in this continuum. As laws get updated or created, they must not kill the golden goose of the digital age, our ability to "think with data."

Data-driven knowledge creation, thinking with data, learning with data, however one phrases it, is, as some suggest, how America gained a competitive advantage in the digital world. Data-driven knowledge creation has improved processes from individualized medicine to congestion relief. It has also generated significant economic growth. This driver of the digital age evolved with technology but was not part of any central economic plan until now, as many of the world's economies see digital innovation being tied to economic and societal growth.

In the 1980s a few statisticians began working with the credit bureaus to discover whether one could model the likelihood that a person with a particular credit history would go bankrupt. These statisticians used old credit histories where outcomes were known. The outcome of that research was the credit score, which plays a significant role in assessing the risk related to loans, insurance, mobile phones, and many other services. The scoring models are developed by updating past histories. Those models then use current data to predict the likelihood of a specific event, like bankruptcy. Decisions are then made on whether to grant credit and at what price. Credit scoring is a classic example of thinking with data and then acting with data; a minimal sketch of this pattern follows below.

Credit scoring was the beginning of today's world, where predictive sciences are used to understand human behavior beyond what is intuitive. New knowledge is good. Knowledge can range from a cure for cancer to personalization that is less intrusive. Knowledge is created by university scientists and by data scientists inside organizations monitoring clicks on a website. A significant portion of knowledge creation comes from advanced analytics that makes use of data that were collected or created for other explicit purposes. Knowledge creation is not the same as automated decision making, even though it could lead to that outcome. Terms like profiling conflate the two concepts. The knowledge created from analytics is neither good nor bad. However, the use of that knowledge can create great individual and societal benefits or might lead to bad consequences for both people and society. The key point is that the consequences typically come from the misuse of the knowledge, not the knowledge itself. Laws need to reflect this distinction.

The IAF is not suggesting that data-driven knowledge creation is perfect. It is dependent on the availability of the data, the quality of the data, and the quality of those interpreting the insights. But, for the most part, knowledge creation is better than the alternative. The concern is that new privacy laws, particularly those emerging in the U.S., might stifle new data-driven knowledge creation as an unintended consequence of their quest to protect individuals against the application of knowledge.
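To make the distinction between thinking with data and acting with data concrete, here is a minimal, purely illustrative sketch of the credit-scoring pattern described above. The features, the historical records, and the decision threshold are all invented for the example; no real scoring model is this simple, and the scikit-learn workflow is our assumption, not a description of any bureau's system.

```python
# Illustrative toy only: not any real credit-scoring model.
# "Thinking with data": learn from historical credit records with known outcomes.
# "Acting with data": apply the learned model to a current applicant.
from sklearn.linear_model import LogisticRegression

# Hypothetical old credit histories: [utilization_rate, late_payments, years_of_history]
history = [
    [0.95, 4, 2],
    [0.20, 0, 15],
    [0.70, 2, 5],
    [0.10, 0, 20],
    [0.85, 3, 1],
    [0.30, 1, 10],
]
went_bankrupt = [1, 0, 1, 0, 1, 0]  # known outcomes from the old histories

# Knowledge creation: model which patterns in past data preceded the event.
model = LogisticRegression().fit(history, went_bankrupt)

# Knowledge application: score a current applicant and make a credit decision.
applicant = [[0.40, 1, 8]]
risk = model.predict_proba(applicant)[0][1]  # estimated probability of bankruptcy
decision = "decline" if risk > 0.5 else "approve"  # threshold chosen arbitrarily here
print(f"estimated risk {risk:.2f} -> {decision}")
```

The point of the sketch is where the policy questions attach: the fit step is knowledge creation, while the decision step is where consequences for an individual arise.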
Most international data protection and privacy laws require permission before data is processed in any form or fashion. Repurposing data for research, whether commercial or academic, must be permitted by the law. Today, in the U.S., for the most part, using data to create new insights is not regulated by the law. Organizations are free to think and learn with data. It is only when the data is used that explicit sector-specific laws, or general protections against unfair and deceptive practices, kick in. Some of the applications coming from analytics have increasingly been seen as harmful. The collecting, mining and using of personal data by technology companies caused Alastair Mactaggart to start the ballot initiative that resulted in the California Consumer Privacy Act.

The IAF has been a supporter of new and updated legal tools, including a new comprehensive privacy law in the U.S. But as those laws are considered, it should be recognized that the freedom to think and learn with data has been a driver of many desirable outcomes, including access to new products and services and faster economic growth. There are obligations that all organizations must bear when they both create knowledge with data and apply this knowledge. However, the consequences to individuals are generally less when knowledge creation is the objective. By extension, the obligations should be different when knowledge is applied, where it can have a more direct consequence on an individual. Laws should reflect this.

When the IAF issues its legislative model this summer, there will be specific provisions that explicitly make data-driven knowledge creation permissible with appropriate accountability safeguards in place. As the states and Congress consider new laws, the IAF suggests they consider protecting data-driven knowledge creation and look for and adjust provisions that, whether intentional or not, make thinking with data much more difficult. Please let us know what you think.

  • Which Accountability Category Do You Fit In?

For the past month, IAF staff has been working to turn the IAF data protection principles released last summer into legislative language. It has been our observation that "legislative text" is the more comfortable lingua franca for policy makers in the U.S. Creating legislation from principles is a humbling experience. The principles seem so straightforward, particularly when they have been developed over the greater part of a decade in one IAF project after another. The IAF team can catalogue the concepts in the principles to the projects where they were tested. But translating the concepts into legal burdens, whether for organizations tied to multiple stakeholder interests or for regulators charged with keeping them within legal bounds, is not so simple. It quickly became apparent that what seems clear, and what the team has taken for granted, is mysterious to others.

The license to break glass through more complex data processing and data use involving, for example, advanced analytics is an agreement to have processes in place so that the interests of all stakeholders, as reflected in both positive and negative consequences, are taken into consideration. Moreover, the processes must be conducted with honesty and competence. But when should an organization carry that significant burden?

Accountability 1.0 is for organizations of all sizes and complexities. It requires organizations to have policies that link to the law, mechanisms to put them in place, security safeguards, internal oversight, and documentation for basic processes. It is scalable, and tools and generic templates may be built by third-party vendors, trade associations, and even regulators.

Accountability 2.0 is for the glass breakers, such as those who use data from sensors, employ artificial intelligence and machine learning, create inferred data, or make probability-based decisions about people. Demonstrable accountability 2.0 is the license for breaking glass. Therefore, accountability 2.0 requires policies, implementation rigor (including assessments), internal reviews, individual recourse and a different rigor of oversight. For the glass breakers, accountability 2.0 is the cost of doing business with data that is very impactful, data that can create positive outcomes for individuals and society (and shareholders) but can also result in the increased loss of private space and the potential for more negative consequences. While the components of an accountability 2.0 program are the same from organization to organization, they need to be customized to be effective.

So how is a law written that differentiates the data glass breakers from the organizations that are not and will not be data glass breakers? How are markets split into those upon which it is and is not reasonable to place the regulatory burden? For the IAF, it has not been hard to discuss these distinctions conceptually, but translating these concepts into legislatively framed rules has been much more difficult. To illustrate, a few scenarios are described:

Global Fashion Chain – This company has stores on every continent. The brand is high fashion, from designer runway to store in weeks, with inventory changing rapidly. It drives traffic by understanding its market and creating an experience that gets its customers into its stores to see what is new. Data is aggregate, and sales are not targeted to individuals. Data is used to complete transactions, not to target individuals. The fashion chain may break glass, but not with very personal data. It is a large, consumer-oriented company but not a data glass breaker. It fits into accountability 1.0.

Digital and Brick-and-Mortar Retailer – This retailer has customers who shop both in its stores and online. Often the customers will shop the stores and then purchase online, and often they shop in reverse. The company tracks what its customers shop for and buy. It links the online and offline experience, identifying customers who have shopped online when they walk into the store with their phones. It analyzes all the data related to its consumers and augments the analysis with purchased data. It uses a service to link consumers with their phone, PC and tablet, all their touch points. It uses analytics to set prices and works with suppliers to predict new products that will sell. It is a data innovator and fits into accountability 2.0.

Scientific Research Driven Pharmaceutical Company – This company's stated mission is to make money by using the scientific method to create new drug-based cures. The company collects most of its personal data about patients through clinical researchers. The clinical research is governed by strict oversight rules developed by governments but implemented through ethics boards. The company will increasingly use data from clinical and medical-outcomes environments to create insights that improve the research method. But the company will only use the data in closely related activities. It is a large company whose data use follows protocols that evolve slowly, governed by a tradition of process oversight and regulation. It fits into accountability 1.0.

Medical Device and Medicine Delivery Company – This company is looking to innovate by creating autonomous delivery methods that will augment linked medical devices. To understand the risks related to such delivery methods, it uses sensor data to enhance its knowledge of how people behave. This approach requires data from many sources to be linked together. To link the data, it both purchases data and uses vendors. The data is processed in a de-identified manner, but the keys are readily available to link new data to existing data sets. The insights are transformed into inferred data. It is a data trend setter and fits into accountability 2.0.

Data Enhancement Startup – The company currently has only ten employees and almost no revenue. Most of the employees are data scientists and engineers proficient in data hygiene. The company is working with potential clients to better understand how to build advanced predictive platforms. The company does not source data directly from individuals. However, it is using the data to perfect its own systems and is creating inferred data from its insights. It fits into accountability 2.0.

On one hand we have a global fashion brand that breaks glass with the way it merchandises, but it does not use data that is impactful on individuals. On the other hand, we have a tiny company, with no data sourced directly from individuals, that is generating inferred data that may well be impactful on individuals. Thus, it is clearly not organization size that defines the split. Nor does industry define who is a data trend setter, and neither does the amount of data define who fits into accountability 2.0. Rather, who will be subject to additional obligations turns on something very different. Placing a bright-line label on that cut point is an interesting challenge. It is a challenge we are working on today.

There is a challenge for you as well. You will live with new legislation for a generation. If you were writing the break point between obligations and rigorous obligations, what bright-line criteria would you use? And where would your company fit today, two years from now, and by 2025? (One hypothetical way to frame such criteria is sketched below.) We are very interested in your thoughts. If you are an IAF member company, please place the June 26 summit on your calendar.
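As a thought experiment only, the sketch below tries to write the accountability 1.0/2.0 cut point as explicit criteria, using the attributes from the scenarios above: sensor data, AI and machine learning, inferred data, and probability-based decisions about people. It is hypothetical and not an IAF-proposed test; notice that size, industry, and data volume, the factors the scenarios rule out, appear nowhere in it.

```python
# Hypothetical thought experiment: the accountability 1.0 / 2.0 cut point
# written as bright-line criteria. Not a proposed legal test.
from dataclasses import dataclass

@dataclass
class Organization:
    uses_sensor_data: bool
    uses_ai_or_ml: bool
    creates_inferred_data: bool
    makes_probability_based_decisions_on_people: bool

def accountability_tier(org: Organization) -> str:
    """Return "2.0" for data glass breakers, otherwise "1.0".

    Deliberately absent: organization size, industry, and data volume,
    none of which defined the split in the scenarios above.
    """
    if (org.uses_sensor_data
            or org.uses_ai_or_ml
            or org.creates_inferred_data
            or org.makes_probability_based_decisions_on_people):
        return "2.0"
    return "1.0"

# The global fashion chain: large and consumer-facing, but aggregate data only.
fashion_chain = Organization(False, False, False, False)
# The ten-person data-enhancement startup: tiny, but creating inferred data.
startup = Organization(False, True, True, False)

print(accountability_tier(fashion_chain))  # "1.0"
print(accountability_tier(startup))        # "2.0"
```

Even this toy version shows the difficulty: each boolean hides a judgment call (when does analytics become "AI"? when is a decision "probability-based"?), which is exactly why a bright line is hard to legislate.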

  • Peter Cullen Issues New IAF Blog: Evolving Ethical Data Impact Assessments

Last fall, the Information Accountability Foundation (IAF) completed work, commissioned by the Office of the Hong Kong Privacy Commissioner for Personal Data, that explored what Ethical Data Stewardship would consist of and what an Ethical Data Impact Assessment would look like. As Hong Kong Commissioner Wong so aptly put it, in order to encourage innovation in various global regions, digital information strategies are being adopted that recognize that the Internet and digital technologies are transforming the world, that the needs of business, government, and the general public impact the competitiveness of their country's economy, and that the protection of personal data and fair data processing are needed for the development of Internet-based economies.

This IAF work, and in particular the model Ethical Data Impact Assessment (EDIA), was presented at the 40th International Privacy and Data Protection Commissioners conference, the theme of which was data ethics. IAF's goal was to help translate sound, implementable business processes into a framework that would enable the economic benefits of technology-driven data innovation on a foundation of digital accountability. In other words, ethical data stewardship. Ultimately, the IAF sees this framework being a key part of Fourth Wave Privacy Legislation. Specifically, EDIAs are demonstrable ways business users of data, data stewards, can achieve ethical or fair data processing.

Since its initial release, we have found enhancements, especially technology-related ones, that will facilitate an EDIA's ability to address a broader range of issues. While other assessment toolkits, such as the Markkula Center for Applied Ethics' An Ethical Toolkit for Engineering/Design Practice, are heavily focused on technology and tech design, the IAF's EDIA was heavily focused on data and data analytics. Given the role technology plays in data-driven models, it seemed appropriate to find a marriage between the two. A revised model EDIA and oversight process can be found here. Additional enhancements to the IAF model EDIA are expected as experience with it grows. In addition, as this is a "model" assessment, the IAF fully expects it will be adapted to fit specific organizational needs.

IAF anticipates that the demand for "ethical data processing," and by extension the need for an EDIA as part of an accountable data stewardship program, will grow due to several inter-related drivers.

First, data is playing a larger role in powering economic growth. Data is the fundamental "natural resource" of the digital economy and is at the heart of the innovation economy. Digital technologies transform data into insights that allow businesses to be more innovative with products and services and to compete more effectively in the global market. Done appropriately, these developments provide benefits to all stakeholders, including society. However, as Commissioner Wong and others have suggested, this growth cannot be fully optimized without trust, and trust is under assault. Innovative technologies, enhanced data analytics capabilities, and new applications of data are creating some unexpected and surprising insights and new business models. The result is questions relating to data use and concerns regarding the "balance of power" and current "governance" models.

Second, as data-impacting activities get more complex, the questions of consequential impacts to individuals (including fairness) increase, and the need to address the lack of trust in organizations as ethical, fair or trustworthy stewards of data is heightened. These questions have led to growing calls for an ethical approach to data processing. These calls are why privacy and privacy compliance programs alone will not be fully satisfactory, even as new laws are created or existing laws evolve. Absent clarity, many data protection authorities will believe they have all the authority they need to fill the vacuum and enforce ethical or fair behavior. In fact, some have stated this already and believe the General Data Protection Regulation (GDPR) provides this.

Third, the reality that "data matters more" starts to expose more clearly the differences between privacy and data protection. European Union law links autonomy and data protection to different rights enshrined in the Charter of Fundamental Rights of the European Union. Privacy tracks to Article 7, Respect for Private and Family Life, and data protection tracks to Article 8, Protection of Personal Data, which requires consent or some other legal basis to process. The GDPR takes a very broad view of Article 8 and interprets it to protect the fundamental rights and freedoms of individuals. This difference is not new, but it is more relevant given the increasingly larger role data is playing in our daily lives.

Fourth, ethical or fair data processing requires a data stewardship approach to governance, which is different from compliance. Compliance is responding to externally created requirements, often prescriptive. Data stewardship is the proactive balancing of stakeholder interests and risks. This balancing is aligned with the goal of the GDPR to protect the fundamental rights and freedoms of the individual and with the requirement of "fair processing" in the GDPR. "Balancing" is a consequences-evaluation exercise considering the beneficial impact of data, the risks, and the mitigation of those risks. As with almost all risk assessment processes, balancing is not a check-box compliance process. An EDIA is a process that looks at the full range of rights and interests of all parties in a data processing activity, assisting an organization in examining the rights and interests impacted by the data collection, use and disclosure in data-driven activities. The goal of the EDIA is to encourage information, communication and technology (ICT) innovation and competition by demonstrating that an organization has considered the interests of all parties before deciding to pursue an advanced data-processing activity.

Fifth, EDIAs do not just increase trust. As businesses shift their mix of value creation to be more centered on data, the questions they face expand from "can I use data" to "should I use data," and how should I do so. This decision-making expansion is magnified by the lack of clarity as to how current laws will be interpreted or enforced. For example, despite the impact the GDPR has had on increasing compliance requirements, before the ink was even dry on the GDPR, there were questions relating to whether these compliance requirements even worked for technologies like artificial intelligence. Increasingly, the internal result is decision-making friction. EDIAs do not answer the question "should I use the data in this way," but they do organize the set of facts to allow for a fair or ethical assessment to be made, one that balances the interests of all stakeholders (a minimal sketch of how such facts might be organized follows below).

In conclusion, from a public policy perspective, EDIAs are a core part of what data stewardship accountability is currently and will be in the future. Leading organizations are implementing them because they are demonstrable and defensible against evolving regulatory expectations. They also increase the optimization of data and value creation by enabling internal decision making and, by extension, reducing reticence-driven friction. This outcome benefits everyone.

  • IAF Issues “Trusted Digital Transformation, Considerations for Canadian Public Policy.”

Many consider Canadian privacy law the pragmatic mid-point between European omnibus, rights-driven data protection and U.S. sectoral privacy laws balanced against free expression and risk of harm. The Personal Information Protection and Electronic Documents Act (PIPEDA) is probably the cleanest translation of the OECD Guidelines into law and, by extension, is a principles-based law. Schedule One of PIPEDA, which contains the principles, was completed in 1996, 23 years ago. Since then a constant revolution has taken place in information and communications technologies (ICT). Today, many tech-enabled, data-driven models provide real value to consumers, society and business. PIPEDA's preamble makes clear that its purpose is to facilitate both individual protection and broad data use. However, the broad use of data is putting increasing strain on how the principles, developed and put into law and practice two decades ago, meet the goals of protection. The revolution in data and technology-enabled data use makes a compelling case for the need to evolve how the core principles are put into effect. When the goal of international interoperability is added, and as other economies update their own public policy models, it is clear changes to Canadian public policy will be made.

The IAF today issued a Canadian discussion paper entitled "Trusted Digital Transformation, Considerations for Canadian Public Policy." The paper makes five policy recommendations and suggests a framework for implementing core Canadian values. The discussion paper is intended to accompany the national dialogue associated with this sort of public policy change. The five recommendations are:

1. Data pertaining to individuals should be used to create real value for identified stakeholders in a balanced, fair fashion that serves individuals, society and private organizations. This balancing should take place in all sectors, and the risk of not using data should be as important a consideration as the risk of negative consequences.
2. Individuals have clear rights related to data and its uses, and those rights should be explicit and actionable in practice as well as in theory.
3. Accountability requires organizations to be reasonable and responsible in what they do with data pertaining to individuals and to be answerable for how they demonstrate that they are acting as effective data stewards.
4. Organizations should have checks and balances in place so their data stewardship is conducted effectively. When organizations cause negative consequences that are material, they should take actions to mitigate those consequences.
5. Enforcement agencies should have powers and resources so that they may act in a manner trusted by the public and seen as predictable by those subject to enforcement.

The discussion paper suggests a risk-based approach to accountability that requires the risks to individuals associated with using data to be considered co-equally with the risks to individuals of not using data. To achieve the dual goals of data-driven economic growth and protection of individuals, the principles-based approach to individual rights and interests needs to evolve, and consideration needs to be given to what it means to be an accountable data steward. The IAF believes these dual, supporting pillars are key concepts as societies consider fourth-phase fair processing governance to match the issues associated with the Fourth Industrial Revolution, where the physical, digital and biological spheres come together.

This paper can be found here and has been posted to a new section on the IAF's publication page entitled Fourth Phase Frameworks. In coming months, the IAF will be adding to that section. Please let us know what you think.

  • Legitimate Processing Invented Here in the U.S.

Privacy law began in the United States when Congress enacted the federal Fair Credit Reporting Act ("FCRA") in 1970. While framed as a consumer protection law, it was most certainly fair processing legislation. FCRA established rights of access, correction, and accuracy, but most importantly it created the concept of permissible purpose. Permissible purpose is the seed for international concepts such as legal basis to process and legitimate use. In writing the IAF's response to the U.S. Government's request for comments on "Developing the Administration's Approach to Consumer Privacy," the IAF reached back to those early concepts and applied them more broadly to the observational world we live in today. You may find the IAF comments here. The IAF, as a research organization, will continue to participate in the development of a U.S. privacy framework that creates guardrails for data-driven innovation and prosperity, encourages fair processing, and is interoperable with global approaches.

  • IAF BLOG- Data Ethics Must Translate into Sound Business Process

    Many of the world’s privacy, technology, and policy experts are in Brussels this week for the 40th International Conference of Data Protection and Privacy Commissioners.  The conference theme this year is Debating Ethics: Dignity and Respect in Data Driven Life.  The question is can ethics, not mandated by law, fill the gap between legal requirements and the moral requirement that processing related to or having an impact on people be fair and just.  The conference is being held in the capital of the European Union that put into effect the strongest data protection law, the General Data Protection Regulation (GDPR), in May of this year.  Yet the GDPR has gaps between the best way to protect people from less than fair processing when new technologies and new data use push the boundaries of what is necessary for the best outcomes in health, education, economic growth and transportation.  This week’s conference is intended to fill these gaps. Among the experts in Brussels this week is Stephen Kai-yi Wong, Privacy Commissioner for Personal Data Hong Kong China.  Earlier this year Commissioner Wong commissioned the Information Accountability Foundation to work with Hong Kong business to develop a framework for Ethical Data Stewardship and an Ethical Data Impact Assessments.  The challenge was to create a compelling and implementable framework for doing the right thing, for all stakeholders, in a legal system with an ombudsman structure for data protection. Experts from 23 Hong Kong enterprises, large and small, participated in this process.  The end products are a research report and a model assessment and oversight process framework   for the cascading of ethics from shared values to workable business process.  This framework was the subject of a workshop in Brussels. Most important, the framework updated the essential elements for accountability to encompass ethical data stewardship which IAF sees as part of the forth wave of privacy  legislation. Ethical data stewardship is most relevant when using data for artificial intelligence, advanced analytics, and other data intensive processes.  The framework also includes a workflow for assuring and demonstrating sound process.  That workflow includes ethics by design based on clearly articulated organisational values, assessment conducted independent of the persons developing the processing, and controls for oversight.  The framework is intended as a starting point for organisations to customise within their own culture but to do so in a way that demonstrates trustworthy data processing. The IAF thanks Commissioner Wong and his staff for commissioning this work.  IAF also appreciates all the work contributed by the Hong Kong business community.  While this project was conducted in Hong Kong, the framework has practical use and implications for all privacy regimes.  In part, that is why it is being released in Brussels.  To quote Giovanni Buttarelli, “data should serve people, people should not serve data.” Please provide feedback on these documents.

  • Evolving Accountability to Ethical Data Stewardship – A Key Part of Wave Four Privacy Laws

    In order to encourage innovation in their regions, digital information strategies are being adopted which recognize that the internet and digital technologies are transforming the world. These strategies address the needs of business, government and the public to impact the competitiveness of their country’s economy, while recognizing the protection of personal data and fair data processing are necessary for the development of Internet-based economies.  If individuals do not trust how organizations are using their data and how organizations are transforming data into information and information into knowledge, and the law is challenged to keep up with the technology, the full value and beneficial consequences to individual and groups of individuals will not be fully realized. As the complexity of internet and digital economies continue to grow, a key building block to addressing these goals will be how accountable data stewards act in a trustworthy and accountable manner. In 2015, the IAF questioned  What Does Information Accountability 2.0 Look Like in a 21st Century Data World? Since that time, terms such as data ethics and ethical processing are increasingly being used.  The popularity of these concepts stems from the rapid growth of innovative data-driven technologies and the application of these innovations to areas that can have a material impact on people’s daily lives.  The sheer volume of data that is observable and where inferences can be made as the product of analytics has and will continue to impact many facets of people’s lives, including new health solutions, business models, personalization for individuals and tangible benefits for society.  Yet these same data and technologies can have an inappropriate impact and even harm on individuals and groups of individuals and cause negative impact on societal goals and values.  An evolved form of accountability, ethical processing, applicable to advanced data driven technology, is needed to help enable the realization of the benefits of this use of data but address resulting risks. We are witnessing several trends. First, as technological and data impacting activities continue to challenge existing privacy laws, wave four privacy laws will take the positive innovations in the GDPR and add processes that let society benefit from the data driven fourth industrial revolution. We are seeing some of this tension play out in the United States as it is in the initial stages of creating a privacy framework to govern data for the next generation. These developments will require starting with a framework such as the IAF put forward that preserves thinking and learning with data so key to prosperity and innovation and that is interoperable with other regimes. However, these new privacy laws will take some time to fully mature. Second, and perhaps more current, as data impacting activities get more complex and the questions of consequential impacts to individuals get larger, there are growing calls for an ethical approach to data processing. There is also a growing trust deficit as to how organizations can be viewed as ethical, fair or trustworthy stewards of data. In short, calls for ethical data processing are responses to the need to address the broader fairness issues to an individual or impacts to an individual. These calls are why Privacy and Privacy Compliance will lessen in their ability to be fully satisfactory.   Wave four frameworks will consist of new ways to address individual rights and participation in data ecosystems. 
But, they will also reframe the expectations and obligation on organizations that act as data stewards. Acting ethically means organizations need to understand and evaluate advanced data processing activities and their positive and negative impacts on all parties.  This approach means organizations will need to be effective data stewards and not just data custodians. Data custodians manage obligations that are largely created externally and for them.   Data stewards consider the interests of all parties and use data in ways that create maximum benefits for all parties while minimizing risks to individuals and other parties. They ask whether the outcomes of their advanced data processing activities are legal, fair and just.  In other words, they operate from the belief that “just because you can does not mean you should”. This approach is similar to corporate social responsibility which encompasses the economic, legal and ethical expectations that society has of organizations at a given point in time.  Like corporate social responsibility, organizations have a corporate data responsibility which encompasses the economic, legal and ethical responsibilities they have with respect to the data they collect, create, transfer and disclose.  These responsibilities form the basis for data stewardship.  Like corporate social responsibility, ultimately, data stewardship is predominantly driven by organizational defined values or principles, policies, culture and conduct and not just technological controls.  Thus, the core question is: what does an appropriate trustworthy and accountable framework look like for a data steward? Enhanced Data Stewardship Accountability Elements In 2009, the accountability principle in the OECD Privacy Principles formed the basis for the Essential Elements of Accountability (Essential Elements).  In 2010, the EU Article 29 Data Protection Working Party issued opinion 3/2010 on the principle of accountability. The Office of the Privacy Commissioner of Canada and provincial commissioners in Alberta and British Colombia adopted accountability guidance in 2012. Hong Kong issued accountability guidance in 2014 and updated it in August 2018, and Colombia issued accountability guidance in 2015. Now, accountability is the foundation of the GDPR. The guidance and the adoption of the GDPR has elevated accountability from check-box compliance to a risk-based approach but has not fully kept up with the advanced data processing activities, such as AI and ML, that may be impactful on people in a significant manner.  To be able to transform data into information and information into knowledge and insight and knowledge into competitive advantage, for individuals to be able to trust data processing activities that might not be within their expectations, enhanced data stewardship accountability (Enhanced Accountability) is needed. The IAF, under a project commissioned by the Hong Kong Privacy Commissionaire, worked with approximately 20 Hong Kong organizations, to create the Ethical Data Stewardship Accountability Elements.  These Enhanced Elements call for organizations to: Define data stewardship values that are reduced to guiding principles and then translated into organizational policies and processes for ethical data processing. 
• Use an “ethics by design” process to translate their data stewardship values into their data analytics and data use design processes so that society, groups of individuals, and individuals themselves, not just the organization, gain value from advanced data processing activities such as AI and ML.

• Require Ethical Data Impact Assessments (EDIAs) when advanced data analytics may significantly impact people or when data-enabled decisions are made without human intervention.

• Use an internal review process that assesses whether EDIAs have been conducted with integrity and competency, whether the issues raised in the EDIA have been resolved, and whether the advanced data processing activities are conducted as planned.

• Be transparent about processes and, where possible, enhance societal, group, and individual interests; widely communicate the data stewardship values that govern advanced data processing activities, such as the AI or ML systems developed, and that underpin decisions; address and document all societal and individual concerns as part of the EDIA process; and design individual accountability systems that provide appropriate opportunities for feedback, relevant explanations, and appeal options for impacted individuals.

• Stand ready to demonstrate the soundness of internal processes to the regulatory agencies that have authority over advanced data processing activities, including AI or ML processes, and to any certifying bodies to which they are subject, when data processing is or may be significantly impactful on people.

The full Ethical Data Stewardship Accountability Elements (link to full version) underpin an ethical data processing business framework that includes Data Stewardship Values and an ethical assessment process. A minimal, hypothetical sketch of how an EDIA might be recorded and gated appears at the end of this post. The IAF believes these enhanced Ethical Data Stewardship Elements will play a significant role in wave four privacy laws and, more immediately, will help organizations demonstrate trustworthy data stewardship. That demonstration will be key to the success of digital strategies, whether for an individual company or for an economy’s overall growth.

Ethical Data Stewardship is but one key part of the Hong Kong project. Please join us in Brussels on 23 October at 14:30 at The Hotel, Room 23.5, for a side event on the Hong Kong project and its outputs. To register, please use the link below.
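To make the Enhanced Elements above more concrete, here is a minimal sketch of how an organization might record an EDIA and gate an advanced processing activity on its internal review. This is purely illustrative: the class and function names (EthicalDataImpactAssessment, edia_required, ready_to_demonstrate) and the fields are our own assumptions, not part of the IAF or Hong Kong project outputs.

```python
from dataclasses import dataclass

@dataclass
class EthicalDataImpactAssessment:
    """Hypothetical EDIA record; all field names are illustrative assumptions."""
    activity: str                   # the advanced data processing under review
    stewardship_values: list[str]   # guiding principles the design was checked against
    benefits: list[str]             # value to individuals, groups, and society
    risks: list[str]                # potential harms or unfair impacts identified
    mitigations: list[str]          # how each identified risk is addressed
    reviewed_with_integrity: bool = False  # internal review sign-off
    issues_resolved: bool = False          # issues raised by the EDIA were resolved

def edia_required(significant_impact: bool, automated_decision: bool) -> bool:
    """Per the elements above, an EDIA is required when analytics may
    significantly impact people or when data-enabled decisions are made
    without human intervention."""
    return significant_impact or automated_decision

def ready_to_demonstrate(edia: EthicalDataImpactAssessment) -> bool:
    """A simple stand-in for the internal review gate: the assessment was
    conducted with integrity, the issues it raised were resolved, and every
    identified risk has a documented mitigation."""
    return (edia.reviewed_with_integrity
            and edia.issues_resolved
            and len(edia.mitigations) >= len(edia.risks))

# Example use, with hypothetical content:
if edia_required(significant_impact=True, automated_decision=False):
    edia = EthicalDataImpactAssessment(
        activity="ML-driven credit pre-screening",
        stewardship_values=["legal", "fair", "just"],
        benefits=["faster decisions for applicants"],
        risks=["unfair outcomes for thin-file applicants"],
        mitigations=["human review of all declines"],
        reviewed_with_integrity=True,
        issues_resolved=True,
    )
    assert ready_to_demonstrate(edia)
```

The point of the sketch is that each element, values, assessment, internal review, and the ability to demonstrate, becomes a recorded, checkable artifact rather than an aspiration.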

  • IAF creates U.S. Privacy Framework Discussion Group

The United States is in the early stages of creating a privacy framework to govern data for the next generation. Over the past two weeks the IAF has issued blogs putting forward a framework that preserves thinking and learning with data and that is interoperable with other regimes. We also issued a blog that placed the framework in the context of the Fourth Industrial Revolution and a fourth wave of privacy legislation.

The Information Accountability Foundation is creating a new discussion group focused on the development of this updated privacy approach, and we will bring the framing of new approaches to our Americas and Asia discussion groups as well. The discussion groups will share ideas among themselves and provide guidance to the IAF research and education mission. One of the discussion group’s first tasks will be to explore regulatory structures related to the framework. The IAF’s draft framework provides a great deal of flexibility to organizations that have demonstrated competent and trustworthy accountability processes. A key question is: how would one enforce the intentions of a law that is flexible without being prescriptive?

In many ways, the IAF West Coast Summit on September 26 will kick off this work. If you would like to be part of the discussion group or would like to attend the Summit, please let Marty Abrams know at mabrams@informationaccountability.org.

  • Fourth Privacy Legislative Wave for Publication

Privacy legislation is again a hot topic in the United States. The California Consumer Privacy Act has added to the pressure created by the European Union General Data Protection Regulation (GDPR). Think tanks, trade groups, and consumer organizations are all proposing frameworks for the United States, and the Information Accountability Foundation (IAF) is one of those groups. Even the U.S. Chamber of Commerce has published principles and indicated it is time for a new law. In part, these frameworks are defensive proposals intended to preempt the new California Consumer Privacy Act. For many of us, however, they are an opportunity to set the guiderails for the fourth industrial revolution (FIR). According to Wikipedia, the FIR is “characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres, collectively referred to as cyber-physical systems.” These proposed frameworks in the United States are part of a global movement to define the rules for the FIR.

The IAF sees the efforts in the United States as fitting into the fourth privacy legislative wave. Each wave links governance to an accelerating pace of technological change, and the fusion of everything digital and genomic surely meets that test. The first privacy legislative wave was the 1970s legislation that began with privacy laws in the German state of Hessen and the U.S. Fair Credit Reporting Act; the trigger was the adoption of mainframe technologies. Wave two began with the enactment of the 1995 EU Data Protection Directive and was responsive to the integration of data into all forms of commerce. The GDPR is the third wave, driven by advanced analytics and observational technologies. Wave four will be the new laws that emerge as countries deal with both the adequacy requirements and expectations created by the GDPR and the observational, interactive, predictive, and action-oriented nature of cyber-physical systems.

Privacy Legislative Waves

Wave One (early 1970s): The first wave began with privacy legislation in the German state of Hessen, in Sweden, and in the U.S. Fair Credit Reporting Act. The wave was triggered by the growing use of mainframe computers. It continued with the OECD Privacy Guidelines, the Council of Europe Convention 108, numerous sector-specific laws in the United States, and European privacy laws that predate the EU Directive. These laws were designed to create controls for individuals related to their data and/or to fix specific risks of harm related to a particular data use, such as drivers’ license information.

Wave Two (mid-1990s): Wave two began with the EU Data Protection Directive, which required each of the EU states to enact conforming legislation. The problem being solved was the concern that differing privacy standards would limit commerce that was increasingly driven by data. There was a clear understanding that fundamental rights needed to be protected to facilitate continent-wide data flows. The adequacy provisions in the Directive had the effect of encouraging privacy legislation in trading-partner countries. While the Directive established the requirement that all processing be conducted under one of six legal bases, consent being one of them, the conforming laws were mostly consent based. Most legislation adopted outside Europe also required consent. Laws were enacted in the Americas, Asia, Africa, and non-EU European states. Accountability, for the most part, was inferred rather than explicit.
The exceptions were Canada and Mexico, where accountability was an express provision of the privacy law. As advanced analytics and communications technologies, such as smartphones, emerged, the adoption of accountable practices, such as privacy by design, was increasingly encouraged.

Wave Three (mid-2010s): The GDPR is wave three. The internet drove the expansion of observational data, which in turn drove a revolution in analytics. In this more complex ecosystem, explicit accountability requirements are seen as necessary. The GDPR therefore places greater emphasis on the six legal bases for processing data and limits consent to where it is fully effective. It requires accountability measures such as privacy by design, data protection officers, and data protection impact assessments. While it broadens use by placing less emphasis on consent, it restricts use by prohibiting profiling with legal effect unless one has consent. The GDPR is intended to drive comparable laws in other regions. Brazil’s new privacy law is comparable (depending on the enforcement mechanism, which has yet to be developed). However, the size and complexity of the GDPR and its link to concepts of European citizenship make replication difficult. Furthermore, it is not fully responsive to cyber-physical systems.

Wave Four (2020s): Wave four privacy laws will take the positive innovations in the GDPR and add processes that let society benefit from the data-driven FIR. Organizations that wish to use data beyond the common understanding of individuals will have to demonstrate they have mechanisms for good data stewardship. Further, they will need to be transparent about their values and have effective governance structures that include ethics by design, comprehensive assessments, and independent oversight. Individuals will have understandable and actionable rights. Wave four regimes will have the provisions necessary to be interoperable with the GDPR.

One might ask: if part of the motivation is GDPR adequacy and expectations, why is this next wave not just an extension of wave three? The answer lies in the interplay between the cyber and physical worlds that is critical for the FIR to actually operate. As new devices are added to the personal ecosystem, they will interact with every other sensor-driven device. Sensors will trigger actions that avoid collisions, interact to maximize medical treatments, and make crosswalks safe for children and seniors. At the same time, autonomy must be respected, and fair processing will need to be mandated to avoid loss of freedom, unfairness, and digital predestination. The GDPR has pioneered provisions that are absolutely necessary for the FIR, such as legitimate use, use within context, and accountable persons and processes. However, the GDPR is conflicted on research, analysis (i.e., profiling), and the automated decision making that is a fundamental part of how the new technology operates. In part, the global discussion of ethics by design reflects the greater complexity that arises when sensors must interact with each other to operate. These ecosystems will continually profile, learn, and make decisions for people. The fourth wave will provide guiderails for this fast movement from big data to big action. Concepts such as a two-phased (thinking and learning) approach to big data will be challenged by this new environment. Governance needs to address this challenge; a toy sketch of how the wave three and wave four gating logic described above might layer together follows below.
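As a purely illustrative aside, the layered gating this post describes, wave three’s legal-basis and consent restrictions plus wave four’s expectation of demonstrable stewardship, can be sketched in a few lines. This is a deliberate simplification of what the post says, not a statement of what the GDPR or any future law actually requires; every name in it (ProcessingActivity, may_proceed, and the fields) is hypothetical.

```python
from dataclasses import dataclass

# The six legal bases for processing referred to above (simplified labels).
LEGAL_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interests",
}

@dataclass
class ProcessingActivity:
    legal_basis: str
    profiling_with_legal_effect: bool   # wave-three style restriction
    beyond_common_understanding: bool   # wave-four trigger described above
    demonstrated_stewardship: bool      # ethics by design, assessments, oversight

def may_proceed(activity: ProcessingActivity) -> tuple[bool, str]:
    """Apply the simplified, layered rules described in this post."""
    if activity.legal_basis not in LEGAL_BASES:
        return False, "no recognized legal basis"
    # Wave three, as summarized above: profiling with legal effect
    # requires consent.
    if activity.profiling_with_legal_effect and activity.legal_basis != "consent":
        return False, "profiling with legal effect requires consent"
    # Wave four, as predicted above: use beyond individuals' common
    # understanding requires demonstrable stewardship mechanisms.
    if activity.beyond_common_understanding and not activity.demonstrated_stewardship:
        return False, "stewardship mechanisms not demonstrated"
    return True, "permitted under this simplified model"

# Example: a sensor ecosystem acting under legitimate interests,
# with demonstrated stewardship mechanisms in place.
print(may_proceed(ProcessingActivity(
    legal_basis="legitimate_interests",
    profiling_with_legal_effect=False,
    beyond_common_understanding=True,
    demonstrated_stewardship=True,
)))
```

The design point the sketch tries to capture is that wave four does not replace the legal-basis layer; it adds a stewardship layer on top of it for uses individuals would not commonly expect.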
The purpose of the IAF’s U.S. Framework is to generate discussion on legislative objectives and design in the U.S. and in other places around the world that might adopt the governance demanded by the FIR. Furthermore, the IAF is working with the Privacy Commissioner for Personal Data in Hong Kong on the data stewardship tools necessary to help organizations make sound decisions in their use of observational data and advanced analytics.

The IAF West Coast Summit on September 26 will focus on these topics. In addition, the IAF will begin a U.S. privacy framework discussion group and will make future frameworks part of its Americas and Asia discussion groups’ agendas. Please let us know what you think.
