
  • The IAF Celebrates Ten Years of Focusing on Accountability and Governance

    The IAF celebrates its 10th anniversary in 2023, and recently published our annual report, 2022 Highlights and 2023 Policy Directions. Over the last few years, IAF research and education work has converged under the overarching principle on which we were formed: accountability and allowing data to serve people. Specifically, our work triangulates Risk, Corporate Research and Proportionality to address the policy and strategy concerns of IAF members and community. Forward-looking organizations are applying and enhancing strategic frameworks to meet the demands of an observational world, responsible and resilient data use, and the expectations for fair AI. The IAF believes it’s critical that accountable organizations be able to think with data and engage in knowledge discovery and creation, within accountable frameworks, in order to achieve a trusted global digital ecosystem. The pathway to innovation is clear. Break the path at any point, and the actions taken will be less than optimal. Bad decisions, or even the reluctance to take actions that should be taken, may harm people. There are no new actions – such as cancer therapies, smart car safety, pollution abatement, and specialized education – without the ability to think with data to create knowledge. All IAF research and education endeavors are focused on furthering the ability of demonstrably accountable companies to use data pertaining to people to maintain the data innovation path. In 2022, we saw increased pressure and scrutiny on digital economy activities, from AI applications to AdTech, from increased cybersecurity threats to international data transfers, and crucial issues arose about individual bodily autonomy and safeguarding our youth. Here or just around the corner is the activation of several U.S. state privacy laws, plus anticipated legislation in Canada (C-27) and the UK, and European regulation (the Digital Services and Digital Markets Acts). The U.S. Congress renewed its evergreen debate about federal privacy legislation. Read more about the IAF’s 2022 Highlights and 2023 Policy Directions here. We look forward to seeing you throughout the year at our informal chats, monthly policy and strategy calls, annual retreat and workshops. We will be focused on governance and what it means to think and act with data so that data serves people.

  • Making Data Driven Innovation Work

    It is increasingly understood that digital agendas in both the public and private sectors are about creating safe pathways for data to turn into information, for information to be turned into knowledge and for knowledge to facilitate actions that are societally beneficial to people. Data-enabled technologies such as artificial intelligence (AI) have significant potential to transform society and people’s lives. The U.S. National Institute of Standards and Technology states “AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world.” [1] Less understood or acknowledged is the distinction between data used to create knowledge and the actions that may result from using or applying this knowledge. Knowledge creation also can be called “thinking with data.” It is key for processes such as data analytics used in product or service development and improvement. It may more broadly be called “data-driven or digital innovation.” The creation of knowledge differs from applying this knowledge. Knowledge application applies the knowledge created to a specific or identifiable set of individuals. Understanding the differences between knowledge creation and knowledge application, and the purposes for which they are being undertaken as well as the different set of risks for each, is critical to understanding how they should be regulated. Knowledge creation in and of itself is less directly impactful on individuals. The application of this knowledge leads to decisions and actions that can impact individuals and brings into consideration traditional privacy concerns. Knowledge creation does not have the same concerns and results. Knowledge creation and knowledge application should not be treated the same way. However, today, regulatory approaches, including regulation, oversight, and enforcement, do not necessarily treat these two processes differently. Both processes require controls, but since the risks are not the same in knowledge creation and knowledge application, the controls should not be the same. At its extreme, polarization is affecting the public policy debate on privacy and data protection. Terms such as autonomy and control increasingly are being interpreted as personal sovereignty. Policymakers designing the new data-driven industrial policy are talking past independent regulators who are concerned that markets are dominated by data extraction that harms all individuals, particularly protected classes. This situation is making legislation and regulation that both facilitates innovation and protects the full range of stakeholder interests increasingly difficult. This uncertainty has led some organizations to delay or forgo the creation of these insights. As a result, the trajectory of the application of data protection public policy has the potential to stifle further data-driven innovation. The Information Accountability Foundation (IAF) believes it is critical that organizations be able to think with data and to engage in knowledge discovery and creation in order to achieve a trusted global digital ecosystem. The IAF has advocated for many years that there should be a distinction between knowledge creation (thinking with data) and knowledge application (acting with knowledge). Increasingly, knowledge creation leads to the development of new insights key to digital innovation.
However, the current pattern of data protection law evolution, and the enforcement of these laws, which does not appreciate the distinction between these two processes, has led to confusion and hesitancy about when advanced analytics for knowledge creation may be used where there is no distinct legal basis for processing personal data for this purpose. The IAF undertook research to understand how organizations discover and create new knowledge and is pleased to publish the results of this research in our report Making Data Driven Innovation Work. The IAF research team conducted extensive interviews with organizations from numerous fields that use data to create knowledge, even if they did not identify their processes in that way. The interviews were used to supplement the team’s decade of work as researchers at the IAF and consultants working on ethical assessments and demonstrable accountability. This project sought to clarify the impact of regulatory and public policy uncertainty on commercially driven knowledge creation, develop scenario-driven examples of this impact, develop a public policy model that enables responsible data-driven knowledge creation through a series of compensatory controls, and create a narrative and path for knowledge creation to be more formally recognized as a legitimate data processing activity in next generation privacy and data protection law. In summary, many organizations use personal data as part of analytics processing (Corporate Research) to solve identified business problems, but most organizations do not use Corporate Research as a distinct processing activity more broadly because: (1) most organizations generally do not break data processing into two distinct phases: knowledge creation (i.e., the research function to identify a solution to a business problem) and knowledge application (i.e., application of the solution to the business problem); (2) in the EU, this portion of data processing (research) is complicated and/or limited: personal data can be processed for scientific research purposes, which are narrowly defined, as long as sufficient safeguards have been implemented and scientific research is run “in accordance with relevant sector-related methodological and ethical standards, in conformity with good practice” (collectively, Safeguards); and/or (3) other countries and future laws limit the use of personal data for research purposes to the improvement/supply of products and services or require the use of de-identified personal data for research and for socially beneficial purposes. The IAF thinks that organizations might be able to use personal data more broadly for Corporate Research if, in addition to legal and regulatory modifications, appropriate Safeguards are put into place and proposes that those Safeguards be developed and implemented in a defined “Research Sandbox” environment. The project developed potential safe pathways for data to turn into information, for information to be turned into knowledge and for knowledge to facilitate actions that are societally beneficial to people, all where processing is conducted in a manner that is lawful, fair, and just. The IAF appreciates that this level and type of paradigm shift will require input from key stakeholders, including regulators and organizations. In short, this research report is just the start, and the IAF hopes to obtain further input through this project, which includes workshops in 2023. [1] Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov)

  • Who Is Paying the Policy Debt?

    Everyone! There is a technology concept called the “tech debt.” It accumulates over time when software development applies the easy and quick incremental answer to complete or change a project or fix glitches in a system. One short-term fix is layered upon another, compounding over time. [1] Eventually, the interest on that debt needs to be repaid. There is a corollary in the information policy world that could be referred to as “policy debt.” It occurs in both the public and corporate policy arenas and links to the innovation cycle, where any break in the cycle, be it data, information, knowledge or action, impacts business resiliency. The policy debt has been accruing interest since the mid-1990s, and the interest payment seems to be coming due now. Western liberal democracies are based on capitalism and its corporate structure. Corporations have shareholders that elect boards to govern the corporations for the benefit of stakeholders. Those boards are responsible for four governance principles: strategy, financial performance, business resiliency and compliance. Which of those four principles are touched by current regulatory direction and privacy laws that lag digital technologies? Obviously, compliance is first, with billion-dollar fines for failure to comply. But the debt payment does not stop there. Regulators are now requiring that data be purged, and that the software developed with the data be erased as well. In Denmark, regulators required that schools brick student computers linked to the cloud. These actions implicate business resiliency. In the latest Meta case, the European Data Protection Board questioned whether Meta’s advertising-based business model is actually a legitimate business strategy. Three out of four governance principles are triggered by regulatory attempts to pay down the policy debt. What follows is from my first-hand experience with the policy world dating back to 1988, when I went to work at TRW as consumer policy director. I visited Brussels when the Data Protection Directive was being drafted, sat at the table when the U.S. FTC and the states took on the credit reporting industry, and, most important, when the regulation of the consumer Internet was debated in Washington, DC meeting rooms. It is that Internet debate where the policy debt acceleration began. Third-party cookies were introduced in the mid-1990s as a means to facilitate an advertising-funded consumer Internet. This approach required triggers on browsers that linked to tracking software stored on consumers’ hard drives. Was this tracking within the public commons, or was it comparable to families’ homes, where some level of seclusion was expected? The policy decision was to not answer this question and to push the answer to the future when the consumer Internet was better established. Instead, the focus was on transparency in the form of privacy notices that, over time, would get increasingly long and dense. The interest due on this policy decision has been accruing. That policy indecision facilitated nearly two generations of digital and economic growth – new products and services, the rise of new brands and jobs – all fueled by personalized advertising. It made possible many new business models and, in some cases, disinformation and manipulation. Eventually it focused policymaker attention on that policy debt.
That first generation of observational technologies became not just a means for facilitating advertising; it also became central to how things actually work in the interconnected world in which we live. Smart cars, medical devices, cyber security, fraud detection, communications, and the whole Internet of Things were and are dependent on observation. How is a means for supporting a digital ecosystem fueled by observation created without paying this debt? Many academics and others have given this observation a name, “surveillance capitalism,” which suggests simple solutions to paying down the policy debt’s accrued interest. Terms like data minimization, do-not-track, and do-not-sell become part of the policy vocabulary. But simple answers rarely work. As Professor Dan Solove points out in his new article, “Data is What Data Does: Regulating Use, Harm and Risk Instead of Sensitive Data,” legislation based on use, harm, and future harm in the form of risk is complicated, and regulating on the proposition that some data is more sensitive than other data is simple but not effective. Some would say that the EU GDPR was designed to create a pathway to paying the policy debt, but the essential nature of observation and the new technologies that it facilitates have made implementation of the plain language of the GDPR and GDPR-inspired legislation in other regions less than optimal. The GDPR was intended to be risk-based, but the GDPR did not define the risks that are to be considered as organizations manage digital processes. Data protection and privacy speak to three tasks: assure a space not subject to observation, where private thoughts and family life might prosper; allow people to define themselves and not be defined completely by their digital waste; and ensure fair outcomes when data is processed. Does the risk-based approach place the emphasis on personal controls over observation and processing – data subject rights – or on the fairness of outcomes? Both sides of that equation are important, but risk management requires prioritizing one over the other. The IAF work on “risk of what” has led us to understand that visualizing risk is not a matter of one outcome versus data subject rights, but rather depends on a stakeholder’s first impression of what is most at issue. Stakeholders go beyond the data subject and the controller and include parties impacted by the processing that are not the active participants. So, a risk-based policy system must be stakeholder based. There is a great deal of evidence that privacy regulators are doubling down on data subject rights, with a narrow focus on one stakeholder, the data subject. Recent cases have focused on narrowing legitimate legal bases, requiring transparency that must be both simple and complete (conflicting values), and reading necessity in a way that reaches to the legitimacy of business processes. So, as we celebrate privacy week, let’s spend a minute thinking about the policy debt. We might think about new policy models that embrace the complexity of the digital age, consider all stakeholders’ interests, and make sure the policy debt is paid in a fashion that is equitable to all in the many roles they play: data subject, patient, employee, citizen, student, shareholder, pensioner, etc. [1] Technical Debt https://www.productplan.com/glossary/technical-debt/

  • A Principled Approach to Data Protection Risk Balancing

    Data is necessary for information, and information is necessary for knowledge. Knowledge can be used to advance as well as hinder individuals’ rights and interests. Individuals’ right to autonomy needs to be balanced with other interests that individuals, society, organizations, and government have. Data turned into information may have value to the person to whom the data pertains, but that information also may be impactful on a whole population of individuals that share attributes. Data protection analysis usually is focused on the data user and the data subject, but there is a full range of interests and stakeholders that should factor into that analysis. The IAF “Risk of What” project examined the dimensions of risk to those stakeholders and how different stakeholders’ risks could be graphically represented. Since 2015, the IAF has conducted numerous projects on assessments. Starting in early 2020, the IAF team has been exploring the application of the proportionality principle. In doing so, the IAF brought the findings from the other projects together to analyze a principles-based approach to risk balancing. The IAF’s newly published paper, “A Principled Approach to Risk and Interest Balancing – Multi-Dimensional Proportionality,” explains that the meaning of proportionality in the private sector is broader than it is in the public sector because many more factors need to be considered and weighed. The “Multi-Dimensional Proportionality” project is ongoing, continuing to give depth and dimension to the proportionality principle in the private sector. This project will continue into 2023.

  • Time for an Unfairness Rule on Fair Information Processing

    The IAF has submitted its comments on the FTC Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (ANPR). The IAF staff thinks it is time for the FTC to issue an unfairness rule on the fair processing of data pertaining to people. However, that rule should be tailored to preventing a set of adverse processing impacts that are designated in the rule. The comments reflect the views of the IAF staff and not necessarily those of the IAF Board or members. Data minimization and purpose limitations are broad concepts, and, as defined by the FTC, so is commercial surveillance. Bright-line rules based on broad concepts are very tantalizing, and they should be very rare. An example of a bright-line rule is found in the 1992 FTC consent orders with TRW Information Systems & Services and Equifax, which established that any combination of tradeline information and identifying information from a consumer reporting agency was a consumer report. Those consent orders were a privacy case that essentially outlawed a marketing support service called reverse bank card append. Since 1992, such bright-line privacy cases that outlaw a line of commerce have been few and far between. Appropriate data collection and use are very context driven and need to be assessed in context. As a result, privacy laws in Virginia, Colorado and Connecticut require that risky processing be assessed for harm to individuals, groups, and society at large. Likewise, the IAF continually defaults to risk assessments measured against a yardstick of adverse processing impacts in its model legislation, the FAIR and OPEN USE ACT, and such risk assessments are a key point in the IAF’s comments on the ANPR. There has been criticism that the ANPR is too broad. Given the nature of a digital society, the breadth of questions in the ANPR is respectful of the extent of the issues related to broad data collection/creation, transformation of data into information and knowledge, and then using that knowledge to create action. In responding to the ANPR, the IAF relied on research conducted by IAF principals dating back to the early 2000s. The IAF’s comments addressed the difference between observation and commercial surveillance, the importance of accountability, and the need for a clear set of standards against which to assess risks and benefits. As mentioned above, the IAF also addressed the weaknesses associated with bright-line principles, such as data minimization and purpose limitations, particularly as they relate to knowledge discovery (the use of data to create new insights). The IAF sees the ANPR as another milestone in establishing fair processing guidance for commerce in the United States. The process would be useful even if Congress were to enact a new privacy law because the research submitted would be applicable to any rulemaking required by a new law. The IAF team welcomes your thoughts on its ANPR comments.

  • Let’s Return Risk to the Risk Based Approach

    Professor Lokke Moerel is right. The European data protection commissioners have done their best to use cases, guidance, and opinions to write the risk-based approach out of the General Data Protection Regulation (GDPR). Professor Moerel’s article, “What happened to the Risk Based Approach to Data Transfers?,” looks at the law, legislative history, guidance and the European Court of Justice opinions to prove that accountability, the basis for the risk-based approach, is about responsible data use, not just about evidence of compliance. She also documents the pushback during the legislative process from the Article 29 Working Party on responsible use as the key to accountability. The Galway Project, which began in 2009 (2009 also was the first year of the Global Accountability Dialog), defined the modern data protection accountability principle and gave rise to the essential elements of accountability. At the Project’s very first meeting, the group, composed of data protection and privacy regulators, NGOs, and business, defined accountability as being responsible and answerable for the rights secured by data protection. Almost from the beginning, there was a framing by some agencies that accountability should be used to correct weaknesses in basic privacy protections by forcing controllers to document their work. But documentation of work is the second part of the equation: being answerable. The question is: answerable for what? The response is being a responsible steward for data. The risk-based approach was the great promise of the GDPR. It would put new obligations on organizations but would also create the pathway to fulfilling the full range of human rights and interests that come into play in an information age. Organizations would identify stakeholders and their interests and would assess whether data use was in balance by conducting data protection impact assessments (DPIAs) in a competent and honest fashion. That documentation then would be available to regulators who would judge whether the organizations were using the data in a person-forward fashion. But this approach only works if regulators recognize that accountability requires organizations to be responsible and then answerable for being responsible. Regulators in Canada, Hong Kong, Colombia, the Philippines, Australia, and Bermuda recognize this concept. In a 2013 meeting with the Article 29 Working Party, I was asked what was the one thing European DPAs could do to encourage accountability, and I said to duplicate the guidance issued by the Canadian commissioners. That guidance was never issued in Europe. In 2021, the IAF conducted a project, “Risk of What,” to identify why consensus on risk was so difficult in the privacy/data protection field. The “Risk of What?” project led to the learnings that stakeholders have a first take on worst case and that each stakeholder has their own worst case. These learnings led the IAF to suggest that graphic representations of stakeholders, fundamental rights and interests, and adverse outcomes would begin to resolve the issue. But that type of resolution only works if the objective for data protection is responsible use. The overarching GDPR requirement that processing must be fair leaves the impression that organizational responsible use is possible.
However, Professor Moerel reminds us that recognition and application of the risk-based approach, the basis for accountability, requires an understanding of the GDPR’s legislative history, guidance, and court decisions. The IAF will continue to advance accountability as the responsible information governance system in this complex digital ecosystem, but it would be helpful if regulatory authorities would read Professor Moerel’s article with an open mind.

  • Location Data and Evidence for the FTC ANPR

    I was well into writing a blog related to the Federal Trade Commission’s advance notice of proposed rulemaking (ANPR) on “Commercial Surveillance and Data Security” when the FTC announced it had sued Kochava, a data aggregator whose services are location based. This suit follows the Federal Communications Commission’s announcement that FCC Chair Jessica Rosenworcel had released information from 15 mobile phone providers on their use of location data. Chair Rosenworcel said, “This information and geolocation data is really sensitive. It’s a record of where we’ve been and who we are.” These press releases by the FTC and the FCC complicated my blog writing. From every side and direction, the seeing and recording of behavior is being framed as surveillance, specifically “commercial surveillance.” The most common example of surveillance is location data. Regulator after regulator is focused on location data. To be perfectly honest, it isn’t the location; it is the context of the location. What information other than the device is being tracked? Is the location a hotel? A store? A bar? A health center? A women’s health center? But while location is the issue of the day, the debate will impact all observation. Any smart device that has sensors is in scope. All smart devices are observing, and they are observing for a set of purposes. Furthermore, seeing can be separated from remembering. However, not all location information needs to be treated the same. Location information can be considered special; boundaries can be put around its use. Location data is special, for all the reasons noted by the FCC chair. The FTC’s ANPR is another step in the necessary global discussion of what fully balanced data protection means in an observational age. The ANPR led me to reread the chapter on surveillance in Professor Neil Richards’ book “Why Privacy Matters.” My favorite statement from that chapter is: “. . . a more sophisticated understanding of surveillance reveals a problem: we need an explanation for when and why surveillance is dangerous and when it is not.” The New York Times recently printed an opinion piece by Alex Kingsbury, We’re About to Find Out What Happens When Privacy is All But Gone. Kingsbury starts off with the fact that all smartphones by their very nature are surveillance devices. Observation is a necessity if the phone is to work. It needs to know where the user is to function as a phone. While some apps need to be observational, others do not need to be. The navigation app could work like old static paper maps, but users would not be happy. Maps and navigation systems need to know the user’s location in order to function for the user’s purposes, but sometimes location-based ads are serving the user’s purposes, like the pizza restaurant nearby, and sometimes they are serving someone else’s, like an algorithm that predicts whether I can be enticed to eat pizza in the first place. The FTC’s ANPR uses the term “Commercial Surveillance.” Use of the emotionally loaded term “commercial surveillance” is an issue. I prefer to use the terms “observation” and “observed data” because they better describe what watching and recording mean in a connected age where some observation is necessary for things to work, some observation is recorded for nefarious purposes, and other observed data is repurposed for political control.
While I understand the attractiveness of the term “commercial surveillance,” it is not useful because it places too much emotion in a discussion that requires dispassion. The term “surveillance” suggests a target and an intended outcome. Observation is a more neutral word. In 2014, when I wrote the “Taxonomy of Data Based on Origin,” I used the term “observed data.” The governance of observed data is intended to answer the question raised by Neil Richards: “When is the use of the observed data dangerous and when is it not?” The New York Times opinion piece is useful but not nuanced enough. Data policy governance covers three privacy interests: seclusion, autonomy, and fair processing. Observation is a growing feature of our digital age that overwhelms seclusion, making governance for autonomy and fair processing ever more important. I absolutely agree that there is frivolous observation, where observation is a power, not an empowerment, tool. But, by framing the ANPR as about commercial surveillance, an emotional driver is triggered that gets in the way of precise policy making. The IAF’s comments on the ANPR will start with the premise that data should serve people and do so with respect for the full range of people’s interests impacted when data is created and used, and used to create even more data. Professor Richards’ point is front and center in this discussion, but I would frame it differently. I would start with a recognition that observation is necessary for connected technologies to work. It also is necessary to create new insights. In order for both points to be understood, it is necessary to ask several questions: When is observation targeted in a manner that controls others’ basic interests? When should observation be off limits to protect secluded space? How are appropriate levels of autonomy and processing maintained so that the use is fair? How are processes governed, illuminated and made addressable by external oversight and enforcement? Should processes be governed so they serve the individual’s interests and not someone else’s? When is retention a matter of asymmetric power? The nuance in these questions, and the answers to them, is important.

  • Assessing the ADPPA for Improvements that Would Not Impact Basic Design

    The views expressed are Martin Abrams’ and shared by other members of the IAF team. They do not necessarily reflect the views of the IAF Board or other members of the IAF community. I have been waiting patiently for comprehensive U.S. privacy legislation for over 20 years. So, I am overjoyed that the “American Data Privacy and Protection Act” (“ADPPA”) has been voted out of committee. The ADPPA has a great deal going for it, both substantively and politically. For example, I like the requirements that organizations must be responsible actors and that risks related to the processing of data pertaining to people must be measured and managed. The ADPPA, however, is not perfect. There is room for improvement, particularly around clarity of terms, setting a basis for assessments, and future-proofing the legislation without upsetting the political applecart. The political calculus for the ADPPA includes carefully structured compromises on preemption and private rights of action. Enacting and implementing privacy legislation is a delicate balance, and any changes may upset that balance. Odds are that the ADPPA will not be passed prior to the Fall elections, so there is time to think about whether the ADPPA might be improved substantively, without changing its design in a fashion that might make passage more difficult. Americans want privacy legislation. Consumers/citizens want to be watched only when it benefits them, want fewer surprises, want a sense that their data is providing value to them (not just to a company) and want the prevention of bad outcomes. Businesses want stability and predictability, want less international stress around data transfers, and want to be able to use data to solve problems. Based on these goals, three objectives for privacy legislation can be articulated: (1) protect individuals from the adverse impact of unreasonable processing of data related to people; (2) create a basis for trusted transfers of data to the United States (if national security uses are solved); and (3) create a pathway for trusted, fair processing of data for tomorrow’s necessary data uses. The stated charitable purpose of the Information Accountability Foundation (IAF) is research and education pertaining to information policy. The IAF team collectively has nearly two centuries of experience in privacy public policy. After the ADPPA was drafted, keeping the three objectives set forth above in mind, the IAF team developed a set of criteria to evaluate whether the ADPPA (and similar pending privacy legislation) meets its intended objectives. This blog only lists the criteria for analysis and not the results of any analysis. Future blogs by IAF staff will make suggestions where the legislation might be improved. The criteria are as follows: Are the objectives for the legislation clearly set forth in specific terms (such as harms to be avoided)? Are the objectives for the legislation vaguely generalized, hard to define or a requirement without a clear objective (such as protect privacy, give individuals control, or mandate data minimization)? Do the objectives set the tone for the legislation? Are important terms clearly and consistently defined? Is there ambiguity because important terms are used differently and inconsistently? Does the legislation use terms that are product- or service-agnostic so that the legislation can evolve with time?
Does the legislation find a route for data to be used when creating new insights is the objective, as legitimate interests is framed in pending Canadian legislation, with reasonable safeguards that are practicably actionable? Given the importance of cyber security, does the legislation create a route for using advanced analytics for detecting, defending against, and responding to cyber security events? If it does create such a route, does the legislation require organizations to put those systems into effect? Does the legislation permit processing for standard business practices without consent but with reasonable controls? Does the legislation contain an actionable and demonstrable methodology for data supply chain governance? Does the legislation require demonstrable programs to assess and measure the likelihood of adverse processing impacts to individuals and groups of individuals taking place? Where these impacts are balanced against clear benefits, does the legislation set forth who has authority to require demonstration and in what form? Are there requirements and means for protecting against the defined adverse processing impacts, and for updating those protections as technologies and business processes evolve (i.e., without requiring new federal legislation)? Does the legislation have enough enforcement muscle to encourage adherence by covered entities? This includes enforcement agencies with the resources to effectively deter bad behavior. Are there obligations for transparency that go beyond privacy notices, that meet standards of explainability to consumers, and that increase the visibility and understanding of processing of data pertaining to people? Can the legislation be explained in an international context to our trading partners? Are data subject rights meaningful and actionable by individuals and implementable by covered entities? Have these rights been balanced against unintended consequences? Lessons from the GDPR policy process are relevant to assessment of the ADPPA. The GDPR took four years to enact and went into effect two years after enactment. The GDPR’s adaptability to change was based on the six legal bases, the intent that it be risk based, the guidance from recitals, and the ability to define uses as legitimate through demonstrable privacy by design. There was an understanding that there were conflicting elements, and there was a hope they might be resolved through the regulatory process. A number of shortcomings have been recognized now that are not easily resolvable without further legislation. The IAF suggests that some clarifications to the ADPPA now would enhance implementation and enforcement of the ADPPA over time, thus attempting to avoid the GDPR conundrum. The ADPPA brings renewed hope for substantive privacy legislation that offers meaningful protections to Americans, and the IAF believes the criteria articulated in this blog will help make that hope a reality.

  • IAF Leadership Continuity

    By Scott Taylor, IAF Board Chair. I am pleased to announce that Barb Lawler has been elected President of the Information Accountability Foundation, and Marty Abrams has been elected to the new official position of Chief Policy Innovation Officer. Marty has been Executive Director since the IAF’s founding in 2013. The IAF family, members, and other participants know Barb and Marty well. Barb has been active with the IAF since its founding, first as a member, then board member, corporate secretary and finally Chief Operating Officer. She led privacy programs at three different companies, testified to the U.S. Congress and California officials, and has been involved in policy development for more than 20 years. Barb brings a programmatic focus that complements Marty’s policy vision. What does this mean for the IAF agenda? Barb will lead the organization, leaving Marty to conceptualize what policy should look like in the future so data might serve people in a manner that is legal, fair and just, creating a pathway for trusted corporate data driven innovation. The challenges are greater today than they were in 2013 when the IAF was incorporated. The IAF’s brand and approach for policy innovation are needed, and this new structure enhances the possibilities.

  • The Summer of New Data Protection Redirection

    Canada and the United Kingdom are putting forward fair processing ideas that I believe will lead to a new direction in the way data protection is being thought about. These ideas follow a trend set when Singapore revised its data protection law, making legitimate interest more functional and directly linked to accountability. Why is this new direction so important? The linkage between probabilistic programming, in its most advanced form, and the protection of individuals from harm is what is at issue in an age of data driven decision making and almost unlimited computing power. Advanced probabilistic programming includes but is not limited to artificial intelligence and the processes, like machine learning, that are used to make AI work. Probabilistic programming, when used in its most beneficial manner, saves lives in both medicine and national security and improves business and service processes in ways that enhance both human existence and corporate profits. Used in a careless or malevolent manner, probabilistic programming at best strips human self-determination and at worst causes loss of life. This truth requires that legislation be more explicit in its achievable objectives, be more flexible in its processes, and be more adaptable in its enforcement. The draft legislation most interesting to me is Canada’s Bill C-27, “An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act.” C-27 links the governance of personal data with the regulation of AI that relates to human behavior. Bill C-27 has three parts. The first part, the Consumer Privacy Protection Act (“CPPA”), will replace the current private sector law, PIPEDA, and the third part, the Artificial Intelligence and Data Act, will govern AI. This blog is not intended to be a review of C-27; others have effectively done this. The intent of the proposed CPPA is to govern the protection of personal data while recognizing the need “for organizations to process data for purposes a reasonable person would consider appropriate in the circumstances.” So, the proposed legislation establishes the concept of context and the expectations of reasonable persons. The privacy and data protection part of this bill then translates the obligation that organizations be accountable into a comprehensive privacy management program. The bill also establishes that processing can only be for defined purposes and that those purposes must be reasonable. The required privacy management program must be staffed by persons with the skills to determine whether described purposes are reasonable. Those purposes must be public so the organization can be held to account. Given Canada’s long history of consent as a privacy process requirement, the bill is consent based. Bill C-27 does have an expanded set of exceptions that are very accommodating to modern processing. The first is an exception to consent for business operations. The first of those business operations is business activities, as long as those business activities do not include processing for the purpose of “influencing the individual’s behaviour and decisions.” Another business operation is a Singapore-like approach to legitimate interests. Here, too, there is a prohibition on influencing. However, political parties are not covered by the draft legislation, and politics is where the manipulation that C-27 intends to protect against has been most controversial.
Included in the list of business activities exceptions is data security. There are additional business operations that include de-identification and internal research, with internal research without consent only allowable if data has been de-identified. These exceptions create a gateway for using data for AI processes such as machine learning where influencing is not the intent. Part three of Bill C-27 regulates artificial intelligence in the private sector, but it is limited to AI that impacts people and meets the definition of a “high impact system.” The parameters for a high impact system will be defined by a new office within the Ministry for Innovation, Science and Economic Development. Bill C-27 defines both bias and harm, providing some clarity for businesses that must determine whether AI is highly impactful. Turning to the United Kingdom, the Department for Digital, Culture, Media & Sport response to its consultation “Data: a new direction” (DCMS Proposal) is an example of a report foretelling legislation. The intended purpose of the DCMS Proposal is legal reform “to secure a pro-growth and trusted data regime.” The DCMS Proposal is not the legislation to reform the UK GDPR, but it anticipates what might be seen later this year in a legislative proposal. Chapter 1 looks at reducing barriers to responsible innovation. The topics in Chapter 1 range from a clearer definition for research, to broader consent, to simplifying legitimate interest as a legal basis. Chapter 1.5 looks at AI and machine learning. Specifically, Chapter 1.5 discusses fairness as it relates to AI, punting the details to a forthcoming paper on AI governance. Chapter 2 is on reducing burdens on business and delivering better outcomes for people. Like Canada’s C-27, the DCMS Proposal would create a requirement for organizations to have “privacy management programmes.” These programs would mirror the mature accountability programs suggested in Singapore to take advantage of a flexible legitimate interest exemption to consent. There is growing agreement that modern processing is too complex to place the burden on individuals to govern data uses with their informed choices. The GDPR is intended to be based on permissible uses, but it still depends, to a great extent, on individual decisions to consent or object. State legislation in the United States has tended to be data subject rights focused. Neither approach is straightforward in governing probabilistic programming. Both Canada and the United Kingdom are confronting that challenge, and that is a positive trend. U.S. Congress Bill H.R. 8152 is due for a markup in the full House Energy and Commerce Committee, but specific language would need to be added to H.R. 8152 to match the efforts in Canada and the United Kingdom. The interplay in Canada between AI and privacy law, the pro-growth and pro-data environment in the United Kingdom, and the possibility of a privacy law in the U.S. and its similarity, if any, to developments in Canada and the United Kingdom will be explored more fully in an IAF Policy and Strategy session on July 21. If you have not received an invitation to the call, send an email to info@informationaccountability.org.

  • The Overturning of Roe v. Wade Undermines the Right to Privacy

    The effect of the United States Supreme Court’s overturning of Roe v. Wade is not limited to the right to abortion and bodily autonomy. Its consequences, as some have worried, are not constrained to the right to contraception, the legality of same-sex sexual activity, or the right of gay couples to marry. It flatly undermines the totality of the implied right to privacy. The majority opinion in Dobbs v. Jackson Women’s Health Organization, which overruled Roe v. Wade last Friday, June 24, 2022, applied the 14th Amendment substantive due process test – whether the right is deeply rooted in the Nation’s history and tradition – and held the 14th Amendment does not protect the right to abortion. In a previous blog, I wrote that the values that make up the right to privacy are “deeply rooted in [our] history and tradition.” The Writs of Assistance and the Quartering Acts enabled invasions by the British of American colonists’ most secluded space – their homes, where they had the autonomy to make their own decisions and where their most private decisions were made. These values, which are fundamental to the right to privacy, are encompassed in the First, Third, and Fourth Amendments, and the expression of these values in those Amendments means that under the substantive due process test, the 14th Amendment protects the right to privacy. By overruling Roe v. Wade, Dobbs calls into question the durability of the right to privacy. The Dobbs majority opinion says that “[n]othing in this opinion should be understood to cast doubt on precedents that do not concern abortion.” It goes on to state that the right to an abortion cannot be justified by a purported analogy to the rights recognized in other cases or by appeals to a broader right to autonomy. But Justice Thomas in his concurrence says the Supreme Court in future cases should reconsider all of its substantive due process precedents and should overrule the demonstrably erroneous ones. In their dissent, Justices Breyer, Sotomayor, and Kagan point out that the rationale for the majority decision – the substantive due process test – provides no way to distinguish between the right to choose an abortion and a range of other rights, and therefore Dobbs threatens – indeed undermines – any number of other constitutional rights. Roe v. Wade held that the right to an abortion is encompassed within the right to privacy. Only a small number of the Supreme Court’s privacy cases concern “decisional privacy,” or the privacy rights the Court has found implied in the Constitution that protect the rights of adults to make decisions about activities such as reproduction and contraception. (Cate.) Excluding Fourth Amendment search and seizure cases and Fifth Amendment self-incrimination cases, the other categorizable Supreme Court privacy cases concern First Amendment and freedom of expression and association issues. (Id.) Thus, by utilizing the substantive due process test to strike down the right to an abortion, as the dissenting opinion in Dobbs points out, the majority is undermining the implied right to privacy. The erosion of the right to privacy is alarming in an observational age.
In 2013, Time described this observational age as one of video cameras peering constantly from lamp poles and storefronts, satellites and drones floating hawkeyed through the sky, smartphones relaying a dizzying barrage of information about their owners to sentinel towers dotting cities and punctuating farm land, license-plate cameras and fast-pass lanes tracking the movements of cars, internet service providers storing searches containing personal data, retailers scanning, remembering and analyzing purchases by customers, smart TVs knowing what programs individuals are watching, and smart meters knowing how much electricity, water and heat individuals are using. In the abortion context, this observational technology could become even more invasive. For example, the Texas Heartbeat Act is enforced exclusively through private civil actions brought against any person who induces an abortion or aids or abets the inducement of an abortion (including paying for or reimbursing the costs of an abortion), with statutory damages awarded in an amount not less than $10,000. Will Texas women seeking abortions be tracked through license-plate cameras and fast-pass lanes when traveling from Texas to Mexico, through their internet searches for out-of-state abortion providers, through their orders and deliveries of abortion pills, and through data in their period tracker apps? Other states have tried to mimic the Texas law. For example, Idaho passed Senate Bill 1309, which gives family members of a pregnant woman the right to sue if a medical professional performs an abortion after cardiac activity is detected (this law has been stayed by the Idaho Supreme Court). With the Dobbs decision, most likely more Texas-like laws will be passed, raising the specter of invasive consequences for even more women. Leaving regulation of personal rights to the states in an observational age is illogical. Data on the internet knows no jurisdictional boundaries. Proposed Missouri legislation would allow private citizens to sue anyone who helps a Missouri resident have an abortion – from the out-of-state physician to whomever helps transport a person across state lines to a clinic. Short of tailing any woman who would be likely to obtain an abortion, observational devices – like geotagging within social media posts, facial recognition used to identify women at out-of-state abortion clinics, or even just “Find My iPhone”-esque location tracking – could be used to determine whether a woman had an out-of-state abortion. But this problem is not limited to abortions. What if it were illegal to gamble or drink under the age of 21 in a state and the state made it illegal to go out of state to gamble or drink? What if it is illegal to purchase a gun in one state and the state makes it illegal to purchase a gun in a state where it is legal to do so? The Biden Administration stated recently that laws making it illegal to go out of state to do what is legal in another state interfere with interstate commerce. The prospect of an “interstate commerce case” brought by the federal government does not stop the chilling effect of these state laws. In an observational age, the right to privacy is more important than ever. Instead of shoring up the right to privacy at this most important time, the Supreme Court is eroding it. Where the majority of society disagrees with the ruling of the Supreme Court, it is up to Congress to enact legislation.
The observational age has made privacy legislation long overdue in the United States. Perhaps, given the anticipated ruling in Dobbs, it is no surprise that H.R. 8152, the draft American Data Privacy and Protection Act, is moving swiftly through Congress. Only time will tell whether H.R. 8152 becomes law, but it is clear that the right to privacy needs protection, and with the makeup of this Supreme Court, Congress is the only branch of the U.S. government that can provide this protection.

  • The Right to Privacy is as Important as the Right to Use Guns

    Due to the recent mass shootings, especially those of shoppers at a Buffalo supermarket and of school children in Uvalde, there is much attention on the Second Amendment’s “right” to use guns in self-defense. This renewed focus has caused me to wonder why so much energy is expended in protecting this Second Amendment “right” when, if Justice Alito’s leaked draft opinion is the view of the majority of the U.S. Supreme Court, so little effort will be exerted in protecting the right to privacy. I suspect Justice Alito would say the difference is that the “right” to use guns in self-defense is an enumerated right and the right to privacy is not mentioned in the U.S. Constitution. The “right” to use guns in self-defense is put in quotes because until the U.S. Supreme Court’s decision in District of Columbia v. Heller in 2008, the Second Amendment was understood to concern the use of guns in connection with militia service. In Heller, the U.S. Supreme Court held there was a limited individual right to keep and bear arms for defensive purposes. Justice Alito says in the leaked draft opinion that rights not mentioned in the U.S. Constitution must be “deeply rooted in this Nation’s history and tradition” and “implicit in the concept of ordered liberty.” Although the words “right to privacy” are not expressly mentioned in the Constitution, the values encompassed by the right to privacy are entrenched in the Constitution, and they are “deeply rooted in this Nation’s history and tradition” and “implicit in the concept of ordered liberty.” Frederick S. Lane in his book American Privacy makes the case that the Bill of Rights’ concern with preserving autonomy and freedom is the functional equivalent of personal privacy. He argues that concern over the privacy of personal communication was one of the major issues that drove American colonists to rebel against England and points out that postal privacy was recognized in the Post Office Act of 1710 (it was a crime to open, detain or delay any letters or packets). But, according to Lane, it was the Writs of Assistance and the Quartering Acts that gave rise to the First, Third and Fourth Amendments. Lane describes these causes and effects as follows: The Writs of Assistance were general, open-ended search warrants that authorized revenue officers to collect taxes. In particular, the Writs gave custom house officers essentially unlimited authority to randomly search sailing ships, dockside warehouses and even private homes for untaxed property. These actions led to the First Amendment’s protection of the autonomy of American citizens in making some of their most private decisions about their beliefs, thoughts, and associations and to the Fourth Amendment’s “right of the people to be secure in their persons, houses, papers and effects” and prohibition against unreasonable searches and seizures and against open-ended search warrants. Under the Quartering Acts, British troops were able to quarter in whatever nonprivate buildings were available, including stables, abandoned homes, and outhouses, and colonists could be compelled to provide, without compensation, British troops with necessities such as food, alcohol, candles, and bedding. These Acts led to the Third Amendment’s prohibition on the quartering of soldiers in houses in times of peace without consent and in times of war except in accordance with law. The Writs of Assistance and the Quartering Acts enabled invasions of the colonists’ most secluded space – their homes.
Their homes were where they had the autonomy to make their own decisions and where their most private decisions were made. These values, which are fundamental to the right to privacy, are encompassed in the First, Third and Fourth Amendments, and the expression of these values in those Amendments means that the concept of the right to privacy is set forth in the U.S. Constitution. So, under any standard of scrutiny, whether the ones applied to enumerated or unenumerated rights, the right to privacy deserves protection with the same intensity as the right to use guns in self-defense. Although the actual words “right to privacy” are not used in the Constitution, the values encompassed by the right to privacy are articulated in the Constitution, and they must not be given short shrift.

  • A Data Protection Bill with an Entirely Different Tone

    The Queen’s speech, which was delivered by Prince Charles on May 10th, contained the expected announcement that the UK government intends to reform its data protection law. Rather than follow the EU path of saying the new law will “give people control over their data,” this announcement emphasized providing more value to society through rules that will create a culture of data protection and will free data to drive responsible innovation. Tone is important. Viviane Reding’s 2012 announcement of the GDPR said it would give people more control over their data and would be risk-based. Over time, the risk-based approach has been marginalized while data subject rights have been in the forefront of GDPR implementation. Let’s be perfectly clear. We only have two pages of government notes and a sentence in a speech. Let’s also be clear that the bill’s description has many elements that will create angst about regulator independence and risk to the UK’s tenuous adequacy status. Translating big ideas into legislative language is a Herculean effort. Most likely, the next document we will see is the government’s response to last Fall’s DCMS consultation. At some point, legislative language will follow. The IAF’s “risk of what” paper has shown that real balancing of risks and rewards has been wrung out of data protection and must be restored to make current and future technologies and data-forward business models work. The EU GDPR will need to be changed to work hand in glove with Europe’s digital agenda and the associated data and AI acts that recently have been introduced. Tone is important. Writing quality data protection law is extremely hard. But in the end, data protection law needs to find a way to help organizations anticipate adverse processing consequences, both from processing and from not processing data, and not just engage in endless check-the-box compliance. The sparse two-page outline hints at a laboratory for the development of next generation data protection law. The IAF team is very interested in the UK’s journey going forward.

  • IAF Testifies on Pre-Rulemaking for California Privacy Rights Act

    California set in motion a major milestone by creating a data protection authority, the California Privacy Protection Agency (CPPA), with the passage of the California Privacy Rights Act (CPRA).  California has the 5th largest economy in the world – for reference, behind only Germany among European economies.  The CPPA will be the first, but likely not the last, state data protection agency in the United States.  How this agency conducts rulemaking, builds capabilities and ultimately begins enforcement of the CPRA in January 2023 will have a significant impact on how the digital future evolves.

The CPPA, like other California public entities, is bound by strict administrative and transparency regulations, which are intended to reduce the leverage large organizations and lobbyists have over how the CPPA will conduct its business.  The CPPA has a tiny staff, especially when compared to the UK ICO with over 900 employees (with both privacy and transparency responsibilities) or the Irish DPC with over 200 employees.  Yet the CPPA has to write and finalize the regulations by mid-fall to put into effect many of the details of the CPRA.  However, the conduct of the CPPA isn't just about finalizing the rules and communicating them to stakeholders.  It is about how the CPPA will strike the balance between oversight of organizations and enforcement on behalf of consumers, between the encouragement of good behavior and the punishment of prohibited behavior.  The CPPA takes on the mantle of weighing the human cost of over-processing data against the human cost of data processing that is over-restrained.  The current controversy at the Federal Trade Commission demonstrates how important it is to get this right.

Last week, the CPPA held three days of public Pre-Rulemaking Stakeholder Sessions, where it took testimony from stakeholders as input to its rulemaking process.  The IAF has invested time in working with many privacy agencies.  Some of that work has been formal, such as the guidance documents created for agencies in Hong Kong and Bermuda.  Some of it has been less formal, such as digital universities in Colombia, Albania, Brussels, and the United Kingdom.  And still other work has been through submissions, such as the legitimate interest assessment developed for Europe.  The IAF's Barb Lawler testified last Friday about the importance of risk assessments for data processing as a governance tool for organizations.  The testimony focused on assessments and assessment processes as critical and necessary tools that serve organizations and that support the CPPA's oversight and enforcement mandates.  That testimony may be found here.

  • There Are Many Reasons to Worry About Data Transfers, but the Austrian DPA's Second Google Analytics Decision Should Not Be One of Them

    Noyb's posting that the "risk-based approach" to data transfers has been rejected is disingenuous.  Moreover, the Austrian Data Protection Authority's second Google Analytics decision is poorly reasoned and is based on two outdated "facts."  For these three reasons, the decision should not apply more broadly to today's transfers of personal data.

(1) The Austrian DPA's conclusion that the risk-based approach is provided for in Article 24(1) and other enumerated provisions of the GDPR but not in Article 44 (or other articles in Chapter V), and therefore that "Chapter V of the GDPR does not recognise a risk-based approach," is flawed reasoning.  Article 44, which sets forth the general principles for transfers, does not stand alone; rather, Article 44 expressly says that it is "subject to the other provisions" of the GDPR.  Therefore, Article 44 is subject to Article 24(1), which requires the controller to identify the risks to the rights and freedoms of natural persons and to take into account the likelihood and severity of those risks in relation to the nature, scope, context and purposes of each data processing operation.  Thus, Article 24(1) sets forth the fundamental obligations of the controller and makes assessing risk part of the accountability principle set forth in Article 5(2).  Since, pursuant to Article 24(1), the controller must assess the risk to natural persons of any data transfer, it is incorrect for the Austrian DPA to have concluded that Chapter V is not risk-based.

(2) The Austrian DPA found that the standard contractual clauses (SCCs) used by Google at the time of the 2020 transfer were the 2010 Standard Contractual Clauses (2010 SCCs).  This is significant because Clause 14 of the 2021 Standard Contractual Clauses (2021 SCCs) requires the parties to take into account "the laws and practices of the third country of destination – including those requiring the disclosure of data to public authorities or authorising access by such authorities – relevant in light of the specific circumstances of the transfer, and the applicable limitations and safeguards."  Footnote 12 to this provision requires the transfer impact assessment that assesses the risk of such a transfer.  Footnote 12 provides in part: "As regards the impact of such laws and practices on compliance with these Clauses, different elements may be considered as part of an overall assessment.  Such elements may include relevant and documented practical experience with prior instances of requests for disclosure from public authorities, or the absence of such requests, covering a sufficiently representative time-frame."  Thus, the 2021 SCCs require that the parties conduct a type of risk-based analysis.  Furthermore, on 18 June 2021, the EDPB published the Final Recommendations on Supplementary Measures for International Transfers (EDPB's Final Recommendations), which also allow a transfer to proceed if there is no reason to believe the relevant legislation will be interpreted and/or applied in practice so as to cover the transferred data and the importer.  In order to make this determination, a form of risk-based analysis must be conducted.  At bottom, the Austrian DPA found only that the words "risk-based" do not appear in Chapter V of the GDPR.  In addition to the faulty legal reasoning discussed above, the facts and the law have changed since the 2020 data transfer.
Since the 2021 SCCs and the EDPB's Final Recommendations were not at issue in the Austrian DPA's second Google Analytics decision, that decision should be limited strictly to its facts and should not be read to apply to today's data transfers more broadly.

(3) The DPA also found that Google could be required by U.S. intelligence services to hand over complete IP addresses.  This conclusion was based on a review of Google documents available online and agreements entered into between Google and users of Google Analytics.  In particular, the DPA focused on the two steps of the IP address anonymization function: the full address of a website visitor initially is transmitted to Google, and the IP address is masked in a second step after it has been received by the Analytics data collection network.  In so finding, the DPA quoted a query of the Google German website done on 18 March 2022.  An English translation of the German query states: "IPs are anonymized or masked as soon as the data is received by Google Analytics and before it is stored or processed."  Google's U.S. website on 3 May 2022, in a technical explanation of how Universal Analytics anonymizes IP addresses, described IP Anonymization as follows: "In Google Analytics 4, IP anonymization is not necessary since IP addresses are not logged or stored.  When a customer of Universal Analytics requests IP-address anonymization, Analytics anonymizes the address as soon as technically feasible.  The IP-anonymization feature in Universal Analytics sets the last octet of IPv4 user IP addresses and the last 80 bits of IPv6 addresses to zeros in memory shortly before being sent to Google Analytics.  The full IP address is never written to disk in this case."

Since Google Analytics does not store full IP addresses, today it cannot be required to hand them over to U.S. intelligence services.  Additionally, this IP Anonymization process demonstrates that, under the assessment required by Footnote 12 of the 2021 SCCs, Google would not be prevented from complying with the SCCs, and, under the assessment required by the third step of the EDPB's Final Recommendations, there is no reason to believe legislation will be interpreted and/or applied in practice so as to cover Google (because it does not have the IP addresses to transfer).  Again, the Austrian DPA's second Google Analytics decision should be limited to its facts and should not be read to apply to today's data transfers more broadly.

Every time data is processed, much less transferred, there is some level of risk to one or more stakeholders.  That is why organizations do risk assessments.  Organizational risk assessments define the likelihood of varying risks and the magnitude of the impact of those risks.  By its very nature, the entire GDPR is risk-based, not just certain articles.  If the warranties made by the parties to the 2021 SCCs are fulfilled and the assessments required by the 2021 SCCs are conducted with competency and integrity, the Austrian DPA's second Google Analytics decision should not hinder today's data transfers.
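For readers who want to see the mechanics being described, the following is a minimal Python sketch of the masking scheme Google's documentation describes: zeroing the last octet of an IPv4 address and the last 80 bits of an IPv6 address.  It is an illustration of the technique only, not Google's actual implementation, and the function name is our own.

    # Illustrative sketch only: approximates the masking behavior described in
    # Google's public documentation (zero the last octet of IPv4 addresses and
    # the last 80 bits of IPv6 addresses). Not Google's actual code.
    import ipaddress

    def anonymize_ip(raw_ip: str) -> str:
        """Return the IP address with its trailing bits zeroed, per the described scheme."""
        addr = ipaddress.ip_address(raw_ip)
        if isinstance(addr, ipaddress.IPv4Address):
            # IPv4: keep the first 24 bits, zero the final octet (8 bits).
            masked = int(addr) & ~0xFF
            return str(ipaddress.IPv4Address(masked))
        # IPv6: keep the first 48 bits, zero the final 80 bits.
        masked = int(addr) & ~((1 << 80) - 1)
        return str(ipaddress.IPv6Address(masked))

    print(anonymize_ip("203.0.113.42"))                           # -> 203.0.113.0
    print(anonymize_ip("2001:db8:85a3:8d3:1319:8a2e:370:7344"))   # -> 2001:db8:85a3::

The legal question in the decision is not this arithmetic but the timing and location of the masking – whether it happens before transmission or only after the full address has reached the Analytics collection network – which is why the DPA's and Google's differing descriptions of the two steps matter.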

  • The Shape Shifting Nature of Privacy Risk Management

    For the past decade, the privacy ecosystem – company CPOs, policymakers, regulators, academics, advocates, the courts and others – has been using the term "risk-based" without any clear definition of what that term should mean in practice.  The EU General Data Protection Regulation (GDPR) was the first privacy/data protection law that, from its inception, was intended to be risk-based.  EU Commissioner Viviane Reding said so when she introduced the regulation in 2012.  Ten years later, it is crystal clear that while risk-based was the GDPR's intent, there was no real discussion of "risk of what?"  This failure is not just a European issue, since almost every privacy law enacted since the GDPR went into effect has been framed as risk-based.  Risk management requires a definition of the negative outcomes to be avoided or mitigated.  If there is disagreement on what is at issue, then there will be consequences that, at a minimum, will lead to wasted allocation of people, time, and resources.

In the spring of 2021, the IAF endeavored to answer the question, "risk of what?"  This endeavor culminated in a September 2021 workshop.  Since the workshop, the IAF has continued to try to answer this fundamental question.  The IAF's interim report, "Risk of What," issued on April 2, 2022, provides some needed context to this question.  However, it is an interim report because the more the IAF delved into the "risk of what?" question, the more it became apparent that the answer became like the "Boggart" from "Harry Potter and the Prisoner of Azkaban" – a shape-shifting creature that assumes the form of whatever most frightens the person who encounters it.  Each individual stakeholder's biggest fears frame the answer to "risk of what?"  The IAF observed that there often has been confusion over whose Boggart should be addressed by the party charged with managing risk, whether controller or processor, regulator, news reporter, advocate or academic.

Today, data protection and privacy management are intended to be risk-based in their execution.  So, the identification of risk impacts how a privacy program is structured and implemented, how that program is overseen, and how individual issues trigger investigations or enforcement actions.  The same is true for data and cyber security programs.  The IAF's hypothesis was that the failure to gain a consensus on the risks to be managed has led to a less than optimal implementation of a risk-based approach to data protection, and that creating a consensus on the defined risks would lead to prioritization and then strategic management of them.  The "Risk of What?" project and subsequent developments showed that this hypothesis was much too simple an answer.  So, this is an interim report.  For today, the IAF staff believes that the answer to "risk of what?" is based on negative outcomes to be avoided and therefore that identification of "adverse processing impacts" offers promise.
The term "adverse processing impact" is defined in the IAF's model legislation, the FAIR and OPEN USA ACT:

"Adverse Processing Impact" means detrimental, deleterious, or disadvantageous consequences to an Individual arising from the Processing of that Individual's Personal Data or to society from the Processing of Personal Data, including—

1. direct or indirect financial loss or economic harm;
2. physical harm, harassment, or threat to an Individual or property;
3. psychological harm, including anxiety, embarrassment, fear, and other mental trauma;
4. inconvenience or expenditure of time;
5. a negative outcome or decision with respect to an Individual's eligibility for a right, privilege, or benefit related to—
   a. employment, including hiring, firing, promotion, demotion, reassignment, or compensation;
   b. credit and insurance, including denial of an application, obtaining less favorable terms, cancellation, or an unfavorable change in terms of coverage;
   c. housing;
   d. education admissions;
   e. financial aid;
   f. professional certification;
   g. issuance of a license; or
   h. the provision of health care and related services;
6. stigmatization or reputational injury;
7. disruption and intrusion from unwanted commercial communications or contacts;
8. discrimination in violation of Federal antidiscrimination laws or antidiscrimination laws of any State or political subdivision thereof;
9. loss of autonomy through acts or practices that are not reasonably foreseeable by an Individual and that are intended to materially—
   i. alter that Individual's experiences;
   ii. limit that Individual's choices;
   iii. influence that Individual's responses; or
   iv. predetermine results or outcomes for that Individual; or
10. other detrimental or negative consequences that affect an Individual's private life, private affairs, private family matters or similar concerns, including actions and communications within an Individual's home or similar physical, online, or digital location, where an Individual has a reasonable expectation that Personal Data or other data will not be collected, observed, or used.

"Adverse processing impact," as defined in the IAF's model legislation, is broad enough to encompass each stakeholder's Boggart.  It is flexible enough to be useful in a full range of contexts, and it is specific enough to be useful in setting controls and objectives for privacy by design, in creating links to the full range of human interests, and possibly as a basis for oversight and enforcement.  Over the next year, the IAF will explore, with partners, how this answer to the "risk of what?" question can be better framed in terms of enterprise risk management, privacy-by-design engineering, and oversight based on external standards and protections.  Please join the work.
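To show how such a definition might be put to work in privacy-by-design tooling, here is a minimal, hypothetical Python sketch that encodes the impact categories as an enumeration an assessment template could iterate over.  The class and field names are our own assumptions, not part of the model legislation.

    # Hypothetical encoding of the "Adverse Processing Impact" categories from the
    # IAF model legislation, for use in an illustrative assessment checklist.
    from dataclasses import dataclass
    from enum import Enum, auto

    class AdverseProcessingImpact(Enum):
        FINANCIAL_LOSS = auto()            # direct or indirect financial/economic harm
        PHYSICAL_HARM = auto()             # physical harm, harassment, threats
        PSYCHOLOGICAL_HARM = auto()        # anxiety, embarrassment, fear, other trauma
        INCONVENIENCE = auto()             # inconvenience or expenditure of time
        ELIGIBILITY_DECISION = auto()      # employment, credit, housing, education, etc.
        REPUTATIONAL_INJURY = auto()       # stigmatization or reputational injury
        UNWANTED_CONTACT = auto()          # intrusive commercial communications or contacts
        UNLAWFUL_DISCRIMINATION = auto()   # violations of antidiscrimination law
        LOSS_OF_AUTONOMY = auto()          # non-foreseeable manipulation of choices or outcomes
        OTHER_PRIVATE_LIFE_HARM = auto()   # other harms to private life and affairs

    @dataclass
    class ImpactAssessmentEntry:
        impact: AdverseProcessingImpact
        likelihood: str   # e.g., "low" / "medium" / "high"
        severity: str
        mitigation: str

    # Example: one row of a hypothetical assessment for a new analytics feature.
    entry = ImpactAssessmentEntry(
        impact=AdverseProcessingImpact.LOSS_OF_AUTONOMY,
        likelihood="low",
        severity="medium",
        mitigation="Disclose profiling logic; offer an opt-out before deployment.",
    )
    print(entry.impact.name, entry.likelihood, entry.severity)

A real program would, of course, attach the statutory sub-categories (5.a–h and 9.i–iv), the likelihood and severity scales, and the mitigation owners appropriate to the organization.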

  • The Right to Data Sharing for Security is a Fundamental Human Right

    Russia's unprovoked invasion of Ukraine reinforces the fact that we live in a very complicated and dangerous world.  This attack on an independent European country, Russia's willingness to unleash violence on civilians, and the Russian government's fabrication and dissemination of propaganda further underscore the critical and increasing role that data, surveillance, and intelligence activities will play in this dangerous world.  This literally is a matter of life or death.

In the weeks leading up to the invasion, the United States government was making public pronouncements forecasting the invasion based on U.S. interpretations of intelligence data.  It is likely that additional, more sensitive data was being shared between allies, which is certainly appropriate.  Odds are that insights from bulk analysis of data continue to be shared between NATO member countries.  Again, very appropriate.

Yet last week, EU Executive Vice President Margrethe Vestager said that reaching an agreement with the U.S. on data flows would not be easy, "given the fundamental legal clash between European privacy rights and U.S. surveillance overreach."  In the current geopolitical environment, that statement was not helpful.  It certainly was poorly timed.  From our perspective, when compared with the actions of hostile global actors, we don't see a fundamental clash at all.  Rather, we see allies who share common values and a deep respect for the same fundamental rights.  If there's a gap between our respective approaches, it's more of a crack than a canyon.

There is universal agreement in the West that the protection of life and security of the person are fundamental, individual human rights.  You can look to Article 3 of the UN Universal Declaration of Human Rights, Article 6 of the Charter of Fundamental Rights of the European Union, or the Canadian Charter of Rights and Freedoms for clear examples of this deeply embedded commitment to life and security.  In the data protection sphere, these fundamental rights are tested and challenged every time communications networks are accessed and data is used by public actors.  Companies invest heavily in cyber security and information technology to secure their data and safeguard these individual rights.  They do this for the benefit of the individual, the organization and society.

Signals intercepts and data analysis are not just a government interest.  The security of information and the safeguarding of individual privacy require that commercial actors monitor their networks and the flow of data for anomalous activity and signs of malicious activity.  There's no choice.  And as we've learned over the past half decade, cyber security works best when threat data is shared among companies that are engaged in an increasingly dangerous battle with rapidly evolving threat actors.  Data sharing is imperative to the efficient and effective prevention of cyber-attacks.  And when necessary, similar data sharing takes place between allied government agencies under defined intelligence sharing agreements.  Does anyone doubt that such data sharing is necessary now?  Experts predict that as the war in Ukraine continues, Russia will again turn to offensive cyber warfare activities as it has in the past.  We need only look to the recent SolarWinds cyberattack by the Russian Foreign Intelligence Service for proof of its capabilities and intentions.

Recital 4 of the EU General Data Protection Regulation is on point: "The processing of personal data should be designed to serve mankind.
The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality."  The fundamental right to life and security of the person is an individual right shared by all in the Western world.  It should be taken into consideration in relation to its function in society and balanced against other fundamental rights in a proportional manner when considering the risks related to data transfers and cyber intelligence information sharing.  We are in no way suggesting there should not be safeguards built into national and cyber security systems and information sharing agreements; indeed, the IAF said in 2014 that there should be accountability requirements for intelligence agencies.  Rather, we suggest that the fundamental right to life and personal security should be part of the data protection proportionality equation.

This blog was prepared by the IAF policy team and does not necessarily reflect the views of the IAF Board of Trustees or members.

  • We Honor our Colleagues on International Women’s Day

    Today is International Women's Day, and we want to recognize the women who make the IAF strong, with a special focus on a long-standing advocate for our work.  This organization is graced with many talented women, both on the Board — Sheila Colclasure, Stephanie Higgins, Michelle Richardson, Laura Sherren, JoAnn Stonier, Della Shea and Pam Snively — and on our staff — Stephanie, Lynn, Candy, Nancy, and Barb.  One woman deserves a special shoutout.  Jennifer Glasgow was a member of the privacy profession before there was such a thing as the privacy profession.  We began working with each other in the early 1990s.  When we created the first business-process-oriented think tank, she stepped forward first – as the CPO of Acxiom, she was a founder of CIPL.  She was one of the three individuals who said it was time for the global accountability dialog to be incorporated as the IAF, and she signed the incorporation papers as treasurer.  Today, Jennifer, we salute you as a pioneer in the field of privacy and as a founder of the IAF.

  • Verified Governance is Coming Like a Runaway Freight Train

    There is growing evidence that data forward processes will require governance that considers impacts not just to privacy and cyber security but also to inappropriate discrimination, precluded opportunities, and fair practice for potential competitors.  "Data forward" means data is becoming a larger and larger enabler of business strategies, but only some of these organizations are starting to invest in governance processes that match their data forward plans.  Being data forward exacerbates the cyber security deficit.  More observation and processing increase the touch points for cyber incursions, and more use of artificial intelligence (AI) introduces new classes of cyber risk.

For example, the Algorithmic Accountability Act (S.3572 and H.R.6580) has been introduced in the United States Senate and House of Representatives.  Both bills would require impact assessments that go well beyond the approach and capabilities of the majority of the organizations that are looking to be data forward.  Last week, the UK Competition and Markets Authority (CMA) reached an agreement (press release) with Google related to Google's removal of third-party cookies from the Chrome browser.  Google's blog makes it clear that any changes made regarding its browser will apply to its advertising products.  The agreement includes assessments that balance the interests of third parties and individuals as well as competitive markets, and it includes a required Monitoring Trustee who will work alongside the CMA and will be central to Google's compliance.

Yet, one of my colleagues was asked by a consulting firm executive why consulting firm client services executives aren't being asked for tools to build more advanced governance.  Good question.  Maybe the answer is that the people they are talking with are doing a sprint to catch up with legacy compliance requirements and are not yet asking the questions required for data forward governance.  GDPR and California have offered some significant lessons for the privacy and cyber security fields.  Among them are that the tools needed to fulfill data subject rights and manage international transfers are much more difficult and expensive to build and implement than anyone anticipated.  The new lesson in emerging market demands is that organizations need to be more anticipatory about governance approaches and need to start building for the change well before the requirement is specific.  The failure to recognize these needs is creating a governance deficit.

In conclusion, data policy management executives, which is what the best privacy officers truly are, are too busy catching up with the past to pay attention to the future.  Similarly, CEOs who don't anticipate this attention deficit will be making interest payments on the governance deficit that accompanies the adoption of data forward strategies that don't include forward-looking governance.  Organizations interested in avoiding the governance deficit that is already beginning to take hold should participate in the governance discussion DLA Piper, The Providence Group, and the IAF are holding on April 11 in Washington, D.C.  Join us.

  • Stewardship Is Still the Key

    Meta says the ability for iPhone users to control tracking will have a $10 billion impact on Facebook's future revenue.  This impact demonstrates the power of individual control, is a clear example of where individual control works, and shows why individual control will always be part of data governance.  However, the simple "yes" or "no" used in phone apps cannot govern all the responsible and answerable uses of data pertaining to individuals.  Instead, the fair processing of data pertaining to individuals is dependent on the duty of care that comes with data stewardship.  And data stewardship is dependent on decisioning criteria that come from democratic institutions.

Individual control dates back to 1967's "Privacy and Freedom" by Alan Westin.  It has been nurtured by the OECD Guidelines and is the preferred approach of the GDPR, as the impediments to legitimate interests demonstrate.  Individual control is the approach taken in state privacy laws enacted over the past few years and in the ones that will be enacted by state legislatures this year.  It is also the key instrument of the new "data dignity" movement championed by the Ethical Tech Project.  But data use is complicated.  What is appropriate is contextual, and the individual is not the only concerned stakeholder.  So, in the end, fair processing of data pertaining to individuals is dependent on the duty of care that created the basis for data stewardship.  That duty of care must come from legislation with clear criteria, so that data serves individuals and so that potential adverse consequences are understood, mitigated, and managed.  Those criteria are part of the IAF FAIR AND OPEN USE ACT and may be found in the Act's definition of "Adverse Processing Impacts."  If you don't have time to read the model legislation, just read the definition, which can be found here.
