
  • Enacting Privacy Legislation Requires Defining Desired Obtainable Outcomes

    There is little debate that the United States needs a comprehensive privacy law. There also is little debate that the U.S. is no closer to enacting such legislation than it was twenty years ago. Many have argued that the issues of federal preemption and private rights of action are the impediments to the enactment of such legislation.  If the rest of the legislation were agreed on, I think those issues are solvable.  The fundamental problem is what privacy legislation actually should try to solve.  If the purpose of legislation is to protect against negative outcomes, then those outcomes need to be identified.  So, what is the purpose of comprehensive privacy legislation?  I read Neil Richards’s new book “Why Privacy Matters” over the holiday break.  He makes the case that privacy does matter because it is about the power that comes from knowledge generated from human data and who wields that power.  After reading the book, I went back and listened to NTIA’s listening session related to protected classes and advanced analytics.  There is general agreement that individuals cannot govern the power that comes from human data by reading notices intended to inform them.  Richards also suggests that the term “abusive” join the terms “unfair” and “deceptive” among the tools that the FTC has to work with.  So, what does “abusive” mean?  In thinking about that issue, I went back to the IAF’s model legislation, the FAIR and OPEN USE ACT.  The footnotes in the model legislation link the model legislative language to the sources for much of that language.  The footnote to the term ADVERSE PROCESSING IMPACT states:  “The IAF Model does not use the terms “harm” or “injury.” Instead, the IAF Model defines a broad concept of “Adverse Processing Impact.” The definition of Adverse Processing Impact aligns with the approach to privacy risk and “privacy problems” codified in the National Institute of Standards and Technology’s publication, NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0 (2020) (“NIST Privacy Framework”). NIST defines privacy events as “potential problems individuals could experience arising from system, product, or service operations with data, whether in digital or non-digital form, through a complete life cycle from data collection through disposal.” NIST Privacy Framework at p. 3. NIST identifies the range of problems an individual can experience as a result of processing as ranging from dignity-type effects such as embarrassment or stigmas to more tangible harms such as discrimination, economic loss, or physical harm. Id. The definition of Adverse Processing Impact is also generally consistent with NIST’s Catalog of Problematic Data Actions and Problems, which is a non-exhaustive, illustrative set of problematic data actions and problems that individuals could experience as the result of data processing.” I suggest that the term “adverse processing impact” as defined and used in the model legislation describes the negative outcomes that we, as a society, want to manage and prevent.  
ADVERSE PROCESSING IMPACT.— The term “Adverse Processing Impact” means detrimental, deleterious, or disadvantageous consequences to an Individual arising from the Processing of that Individual’s Personal Data, or to society from the Processing of Personal Data, including—
    • direct or indirect financial loss or economic harm;
    • physical harm, harassment, or threat to an Individual or property;
    • psychological harm, including anxiety, embarrassment, fear, and other mental trauma;
    • inconvenience or expenditure of time;
    • a negative outcome or decision with respect to an Individual’s eligibility for a right, privilege, or benefit related to employment (including hiring, firing, promotion, demotion, reassignment, or compensation); credit and insurance (including denial of an application, obtaining less favorable terms, cancellation, or an unfavorable change in terms of coverage); housing; education admissions; financial aid; professional certification; issuance of a license; or the provision of health care and related services;
    • stigmatization or reputational injury;
    • disruption and intrusion from unwanted commercial communications or contacts;
    • discrimination in violation of Federal antidiscrimination laws or antidiscrimination laws of any State or political subdivision thereof;
    • loss of autonomy [1] through acts or practices that are not reasonably foreseeable by an Individual and that are intended to materially alter that Individual’s experiences, limit that Individual’s choices, influence that Individual’s responses, or predetermine results or outcomes for that Individual; or [2]
    • other detrimental or negative consequences that affect an Individual’s private life, privacy affairs, private family matters or similar concerns, including actions and communications within an Individual’s home or similar physical, online, or digital location, where an Individual has a reasonable expectation that Personal Data or other data will not be collected, observed, or used.
I also suggest that in managing against adverse processing impacts, we create the means to use human data in a flexible manner to create real value for people.  So please think about the definition of “adverse processing impact” and think about how you would create a risk management program to manage against the risk of adverse outcomes.  On January 27 the IAF will hold a special Privacy Week session of our monthly “Strategy and Policy Call” about adverse processing impact.  Look for the Save the Date.
[1] The concept of “loss of autonomy” is widely recognized in many bills and frameworks including the NIST Privacy Framework, which provides that, “[l]oss of autonomy includes losing control over determinations about information processing or interactions with systems/products/services, as well as needless changes in ordinary behavior, including self-imposed restrictions on expression or civic engagement.” Catalog of Problematic Data Actions and Problems.
[2] The IAF Model applies the well-accepted drafting convention that “or” means “either or both” or, if there is a series of items, “any one item or combination of items”.
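As one way to start the thought experiment the post invites, here is a minimal sketch of how a risk management program might encode the enumerated impact categories and screen a proposed processing activity against them. It is purely hypothetical: neither the model act nor NIST prescribes this structure, and every name, score, and threshold below is invented for illustration.

```python
# Hypothetical sketch only; categories paraphrase the model act's definition,
# the scoring scale and threshold are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto


class AdverseImpact(Enum):
    """Impact categories paraphrased from the definition above."""
    FINANCIAL_LOSS = auto()
    PHYSICAL_HARM = auto()
    PSYCHOLOGICAL_HARM = auto()
    INCONVENIENCE = auto()
    ELIGIBILITY_DECISION = auto()   # employment, credit, housing, etc.
    REPUTATIONAL_INJURY = auto()
    UNWANTED_CONTACT = auto()
    UNLAWFUL_DISCRIMINATION = auto()
    LOSS_OF_AUTONOMY = auto()
    PRIVATE_LIFE_INTRUSION = auto()


@dataclass
class ProcessingActivity:
    name: str
    # Impacts an assessor judges reasonably likely, each with a 1-5 severity.
    likely_impacts: dict = field(default_factory=dict)


def screen(activity: ProcessingActivity, threshold: int = 3) -> list:
    """Return impacts at or above the severity threshold, i.e., those
    needing documented mitigations before processing proceeds."""
    return [impact for impact, severity in activity.likely_impacts.items()
            if severity >= threshold]


if __name__ == "__main__":
    targeting = ProcessingActivity(
        name="behavioral ad targeting",
        likely_impacts={
            AdverseImpact.LOSS_OF_AUTONOMY: 4,
            AdverseImpact.UNWANTED_CONTACT: 3,
            AdverseImpact.INCONVENIENCE: 1,
        },
    )
    for impact in screen(targeting):
        print(f"Mitigation required for: {impact.name}")
```

The design point is simply that an enumerated, statutory definition of adverse outcomes gives a risk program something concrete to screen against, which open-ended terms like “harm” do not.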

  • A Pivotal Event in Data Protection Law

    The recently completed UK Department for Digital, Culture, Media & Sport (DCMS) Consultation on “Data: A new direction” on revising the UK GDPR is a watershed moment.  The Consultation looks at whether GDPR-based laws are truly serving the needs of an information society.  The DCMS foreword states: “Our ultimate aim is to create a more pro-growth and pro-innovation data regime whilst maintaining the UK’s world-leading data protection standards.” The Consultation made tough suggestions on how the regulation might be improved. The IAF staff thought some DCMS suggestions were wise and believed some others would not be fully productive.  The IAF filed comments as part of that Consultation. The Consultation discussed scientific research, reiterated that research is a compatible purpose, and suggested a specific mechanism to create a legal basis for that compatible purpose. In its comments, the IAF suggested a two-phased approach: knowledge creation – the generation of insights about individuals in general – and knowledge application – the utilization of knowledge created on a specific set of individuals.  Our comments focused on creating a pathway forward for using data to create new knowledge through the data analytics process, consistent with the Consultation’s premise.  Private-sector research, that is, knowledge creation, should also be recognized as a legal basis.  To support this suggestion, the IAF used its comments on the Consultation to highlight that the two phases of advanced analytics (and AI), knowledge creation and knowledge application, have different impacts.  There is no impact on individuals from knowledge creation, while there is an impact on individuals from knowledge application, from actually using the knowledge to make decisions pertaining to a specific individual.  These comments are consistent with the comments the IAF made on the EU draft AI Regulation. We also commented on the Consultation’s suggestion that legitimate interests as a legal basis should include a limited set of legitimate interests that would not require a balancing test. The IAF suggested the impediment to legitimate interests was not the balancing test but rather the transparency responsibilities related to the right to object. Our comments also included a discussion of regulator cooperation related to fair outcomes versus fair processing.  The Consultation suggested regulators other than the Information Commissioner should have jurisdiction over fair outcomes, such as discrimination in lending.  We suggested the ICO’s jurisdiction should be specific to assuring processing is fairly done to achieve fair outcomes and that defining what is a fair outcome should be left to other regulators with jurisdiction over those issues. Lastly, the IAF agreed with the Consultation that a specific requirement for comprehensive accountability programs, such as those found in Canada and Singapore, made sense.  However, we pointed out that the removal of requirements to conduct assessments is not necessarily prudent. Our comments are consistent with the provisions of the IAF model legislation, the FAIR AND OPEN USE ACT.

  • AI Impact Assessments Are Necessary and Additive to Existing Business Processes

    A paradox exists in the growing use of artificial intelligence (AI)-enabled technology.  The adoption of AI by more and more organizations increasingly has led to the recognition that AI has some unique challenges and risks or, at the very least, unintended consequences.  At the same time, these organizations are lagging in building and adopting the enhanced governance processes required to manage this risk and achieve the responsible use of AI. As Kay Firth-Butterfield noted in the recent World Economic Forum paper, Building an Organizational Approach to Responsible AI (mit.edu), “To engineer successful digital transformation with AI, companies must embrace a new approach to the responsible use of technology. . . . AI differs from many other tools of digital transformation and raises different concerns because it is the only technology that learns and changes its outcomes as a result. Accordingly, AI can make graver mistakes more quickly than a human could.” AI’s capacity increases or amplifies the risk due to its speed and scale. Why is there this lag in governance? As PwC’s Gain trust by addressing the responsible AI gaps reported, over 60% of respondents do not have an ethical AI framework, nor have they incorporated ethical principles into day-to-day operations. Only 12% of companies have their AI risk management and internal controls fully embedded and automated; 26% of companies have an enterprise approach that has been standardized and communicated; the rest have a siloed or non-standardized approach to AI risk management. One explanation is that implementation is hard. Yet, this “explanation” is not a good excuse. As Tim O’Brien noted in his recent LinkedIn post, AI Ethics Has a Surplus Problem, there is an ever-increasing supply of content and knowledge about potential harms and how to mitigate them. There are open source tools for everything from detecting/mitigating bias in machine learning to bringing intelligibility and explainability to opaque models, for example the tools available at Responsible AI Resources – Microsoft AI. There are countless free frameworks and coursework to support implementation of ethical practices from, for example, fast.ai and Santa Clara University’s Markkula Center. It may be that implementation just is hard, or another factor may be that many of these approaches are not written through the lens of how organizations actually would implement them. A complementary aspect of implementing responsible AI is the use of impact assessments. Academics, NGOs and some policymakers increasingly are recommending using Algorithmic Impact Assessments (AIAs) as part of enhanced governance systems to assess the potential benefits, risks, and controls with the goal of achieving responsible and ethical AI. An AIA is another name for a more expansive impact assessment as outlined in the report of the Information Accountability Foundation (IAF), AI and the Road to Expansive Impact Assessments. AIAs are broader than, for example, what is required in Article 35 of the GDPR relating to Data Protection Impact Assessments (DPIAs) and what is outlined in the EU’s proposed AI Regulation relating to conformity assessments.  Increasingly, there are AI-specific proposed bills in the U.S. and globally, and there has been much governmental debate and investigation as to how to govern the impacts (risks) of AI. These bills universally contemplate some form of impact assessment. 
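To make the bias-detection point above concrete, here is a minimal sketch of the kind of check those open-source toolkits automate. It is written in plain Python against toy data rather than any particular library's API, and the figures are illustrative only.

```python
# Demographic parity difference: the gap between the highest and lowest rate
# of positive model outcomes across demographic groups. 0.0 means parity.
def demographic_parity_difference(predictions, groups):
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [positives / total for total, positives in rates.values()]
    return max(positive_rates) - min(positive_rates)


# Toy example: the model approves 3 of 4 applicants in group "a"
# but only 1 of 4 in group "b".
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.75 - 0.25 = 0.5
```

A governance program would run a check like this (or a richer metric) on every model release and record the result in the impact assessment, which is exactly the kind of repeatable control the surveyed companies lack.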
However, to date, there is no consensus on the structure and focus of an AIA, although many NGOs have released high-level proposals for structure. These high-level proposals, though valuable to the discussion around AIAs, have not addressed all key elements for a successful assessment. Nor have they been written from the vantage point of how business processes work. The IAF has had a great deal of experience collaborating with both business and other stakeholders in designing assessments that go beyond what might be found in a simpler Privacy Impact Assessment. This work was first introduced with the Big Data Ethics Initiative: Assessment Framework (Part B) and then added to as part of the work with the Hong Kong Privacy Commissioner in addressing Enhanced Data Stewardship (EDIA). More recently, work in Canada, A Path to Trustworthy People Beneficial Data Activities, explored a practical way to address this specific data use scenario. A logical extension of this work and the study of demonstrable accountability at leading organisations led the IAF to develop a Model AI Impact Assessment (AIA), a key part of enabling the responsible use of AI.  The full account, including a sample assessment, can be found in the IAF report Evolving AI Impact Assessments (AIA). This report also highlights related governance capabilities that would help enable the application of responsible AI.  The IAF in particular thanks Ilana Golbin, Director of Responsible AI at PwC, for her strong and extensive contribution to this material. How can organizations move forward? These assessments and governance models need to be adapted to fit into an organization’s existing processes and business models as well as its strategy and overall business objectives. Properly designed and implemented, AIAs offer a mechanism to address the complexities of AI solutions, including the ethical and legal aspects and the full range of impacts to all stakeholders. They can help advance an organization’s own goals of mitigating reputational damage and achieving trusted provider status with its stakeholders. AIAs also would serve broader objectives and cover the regulatory aspects of requirements such as DPIAs.  As organizations increase their use of AI, AIAs and associated enhanced governance will be required to satisfy the demands for the ethical and trustworthy adoption of AI.
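The IAF report's actual sample assessment is not reproduced here, so the following is only a hypothetical skeleton of the kinds of fields an expansive AIA typically captures beyond an Article 35 DPIA; every field name and value is invented for illustration.

```python
# Hypothetical AIA skeleton; the IAF's Model AIA is the authoritative version.
from dataclasses import dataclass
from typing import List


@dataclass
class AIImpactAssessment:
    system_name: str
    intended_purpose: str
    stakeholders: List[str]          # all affected parties, not just data subjects
    expected_benefits: List[str]
    identified_risks: List[str]      # legal, ethical, and reputational
    mitigations: List[str]
    residual_risk_accepted_by: str   # a named, accountable decision maker
    review_date: str                 # assessments are living documents


aia = AIImpactAssessment(
    system_name="credit pre-screening model",
    intended_purpose="rank applicants for manual review",
    stakeholders=["applicants", "lender", "regulator"],
    expected_benefits=["faster decisions", "wider credit access"],
    identified_risks=["proxy discrimination", "opaque rejections"],
    mitigations=["bias testing per release", "reason codes for adverse action"],
    residual_risk_accepted_by="Chief Risk Officer",
    review_date="2022-01-01",
)
```

The structural point carried by the sketch is the one the post makes: an AIA weighs benefits against risks for all stakeholders and names an accountable owner, rather than merely documenting compliance.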

  • UK Alters the Trajectory for Global Data Protection Objectives

    The UK Government’s consultation, “Data: A New Direction” (“Consultation”), reframes the objectives for data protection in a manner not suggested for twenty years and changes the trajectory for global data protection’s stated objectives.  The Consultation’s “ultimate aim is creating a more pro-growth and pro-innovation data regime whilst maintaining the UK’s world-leading data protection standards.”  Other jurisdictions have similar goals but are approaching them differently.  The EU has put forward an ambitious digital plan but has not indicated it will amend the underlying data protection law that inhibits knowledge creation – the development of insights.  Singapore has challenged conventional wisdom by designing a pathway to more exemptions to consent to enable innovation.  Canada has proposed a digital agenda that requires freer data use but has not addressed the necessary corresponding data protection adjustments.  When considering the question “risk of what?”, the Consultation states that the opportunity cost that comes from not framing safe innovative data use as an explicit objective for data protection is a risk that must be managed.  The GDPR implies that data protection includes the full range of fundamental rights, but in practice, the GDPR largely has been interpreted in a way that has focused on procedural harms related to transparency and data subject rights.  The push for ethical assessments by the IAF and others has focused on the impact on stakeholders related to the full range of rights and interests.  But the search for a sweet spot of harmonization between rights to control and the benefits of growth and innovation has not led to a balanced discussion.  The last real debate on the balance between growth and privacy took place in Rome nearly twenty years ago.  The conference was suggested by the Italian government and was organized by the Garante under the leadership of Stefano Rodota.  The purpose was to explore the balance between economics and privacy.  The distinguished speakers included Congressman Cliff Stearns and two FTC Commissioners, Orson Swindle and Mozelle Thompson.  The last panel of the conference was composed of Rodota, Thompson and the founder of European data protection, Professor Spiros Simitis.  Thompson discussed the balancing points between economics and privacy.  Simitis, on the other hand, defended privacy and data protection as rights fundamental to freedom and thought it was crass to even suggest economics should be factored into the equation.  Rodota declared Simitis the winner of the debate, and economics has not been an element of data protection since then.  Consequently, persons in the EU data protection field have been cautious since that conference about suggesting that economic growth has fundamental value to people that should be part of the data protection equation. Pro-growth, pro-innovation data protection standards must deter edge riders from using the term innovation to defend obnoxious and intrusive behavior.  The ability to use data for innovation should require substantial safeguards.  One of the results of the Consultation should be a definition of what constitutes appropriate safeguards. The Consultation has its rough points, and it is a long way from Consultation to enacted legislation.  But the debate, closed for twenty years, has been reopened explicitly by the UK government.
The IAF “Risk of What” Workshop is 22 September, and one of the five questions to be explored is explicitly about the Consultation: the UK Consultation, Data: A New Direction, brings front and center data innovation by design as a co-objective alongside data protection by design, a risk mitigation process.  How does this objective get translated into policy? The potential outcomes that drive data risk management must be articulated clearly.  Too many policy leaders’ and regulators’ statements refer to intrusion on privacy when privacy is an undefinable term.  Intrusion into seclusion is real.  The inability to effectively exercise rights is real.  Intentional obscurity of complex processing is real.  The cost to society from preempting knowledge creation and research is real as well.  The Consultation puts on the table the opportunity cost from ignoring safe innovative data use as a risk that must be managed.

  • Does the Conduct of Observation Merit Dedicated Legislation?

    Do pervasive watching and recording cause risks to individuals and society that justify new legislation specific to this risk?  Digital age legislation in the 21st century, to be effective, must target specific human interests that currently are, or in the future will be, at high risk of being abused.  Without understanding the actual or highly likely wrongs to be fixed, legislation misses the target.  Complaints from both organizations and individuals that the EU General Data Protection Regulation (GDPR) and California’s new privacy laws create huge burdens and complexity, with limited improvements or benefits for individuals, are indicative of this problem.  Saying privacy needs to be improved is insufficient.  The links to harms need to be identified and stated.  European policymakers promised that the GDPR would be risk based, and only now is the question “risk of what” arising.  The Information Accountability Foundation (IAF) is holding a workshop addressing “Risk of What” on September 22. In preparation for that workshop, the IAF team has been exploring numerous ways of homing in on the intrusions and potential risks and benefits in a highly observational world that drives the predictive sciences, which in turn lead to decisions impacting individuals.  As the team talked and wrote, the IAF reduced to five the topics that would help the IAF community focus on the fundamental question of “Risk of What.”  One of those topics is what boundaries must exist where observation is central to the way products and services actually work and are delivered.  This concern with observation is not new.  In the United States, Section 652B of the Restatement (Second) of Torts sets forth the privacy tort of Intrusion Upon Seclusion, which has been adopted by most states in their case law.  The European Union Charter of Fundamental Rights protects freedoms, including Article 7’s “Respect for Private and Family Life.”  However, to take advantage of these protections, the individual needs to be aware of the observation. The IAF team’s exploration of the question of “Risk of What” led them to ask whether a generation of accelerating, detailed observation of individuals by both the public and private sectors has led to a new question: whether a law dedicated to setting the boundaries for watching individuals and generating data about those individuals is necessary.  This question is being asked in the context of: (1) more complex processing where individual control is less effective in governing the market and fair processing is the most likely objective for privacy law, and (2) observation becoming more and more central to how things function (e.g., defibrillators and pacemakers are embedded in people’s chests, sensors in cars people drive report back to car manufacturers, and home smart devices create shopping lists, track the homeowners’ sleep, and re-order TV channels based on who is watching).  Thus, to answer the question “risk of what,” the key questions on observation that will be asked on September 22 are: What is out-of-bounds observation, and does it merit its own statutory law — a law specific to observation?  Many contemporary technologies (e.g., mobile smart phones and apps, smart medical devices, smart home devices, water metering devices, and smart cars) require observation to work and improve.  Does this need to observe make legislation nuanced? 
Is a possible approach legislation which prohibits some observation and limits the application of much of the data that comes from observation? Observation through highly targeted ads has made individuals skittish about the number of organizations that “watch” and track them and fearful about where else that data may be reused or sold.  At the same time, they are anxious that only a few organizations seemingly dominate observation and turn that observation into data, which then are transformed into information, insights and finally action. The IAF’s model legislation, The FAIR and OPEN USE Act, does not directly address issues of observation and tracking.  Most privacy laws treat these issues as simple data collection and minimization issues.  California addresses the issues as matters of online tracking and data sales.  To be effective, the law should begin with the basic questions of “where can I watch,” “what can I record,” and lastly “are there boundaries on what I can do with what I have seen and recorded.”  Privacy laws usually deal with part of the last issue, what can be done with the recorded data, but usually do not address the rest of the issues.  The process of turning observation into data and data into information also leads to knowledge.  Knowledge, for better or worse, drives mankind forward.  The question is whether a separate law is needed on the extent of the permissible watching and recording of individuals as part of a harms-based, risk-avoidance system of governance.  The IAF is uncertain of the answer but believes the question needs to be asked.

  • When Data Protection Is Not About Data Protection

    Data protection law is not always about data protection or privacy; sometimes it is just about power.  Last week I was bothered by tweets that touted a “momentous day” in data protection because China had enacted the Personal Information Protection Law (PIPL).  Parts of PIPL deal with the rights to privacy and data protection, but PIPL also is about the government’s desire to assure its version of social harmony by gaining a monopoly on observation.  That monopoly comes from controlling heavily what the private sector may do, creating a nexus for enforcement, and not touching the government’s massive powers to observe.  In this case, the monopoly on observation amounts to surveillance.  According to the dictionary, the difference between observation and surveillance is that observation is the act of observing and the fact of being observed, while surveillance is close observation of a person or a group of persons under suspicion.  News articles are clear that the PIPL covers the private sector only.  PIPL does nothing to rein in the state’s ability to watch and monitor people constantly and to turn the resulting raw data into social scores that may well determine the education people receive or the jobs they might hold.  Now there is a law – PIPL – that controls how the private sector collects data and uses that data to create knowledge about people in China, but there is no law in China that controls how the Chinese government uses that data.  I personally have been working on privacy in China since 2005, when I began organizing a conference with a Chinese university.  So why was PIPL adopted now?  In looking for reference materials on PIPL and why it was enacted, the best article I found was Stephen Bartholomeusz’s opinion piece in “The Sydney Morning Herald” entitled “Billionaire crackdown: China’s risky new pathway to Mao’s ‘common prosperity.’”  A biography of Chinese President Xi Jinping also adds context on the questions of why and why now.  The Chinese leadership has figured out that the pathway to wealth and power is observed data that is translated by advanced analytics into actionable insights.  This conclusion is not new.  FTC Commissioner Rebecca Slaughter reached the same conclusion in her paper “Algorithms and Economic Justice.”  A big difference is that the Chinese leader has the power to act on the insight and not be tempered by competing authorities. Xi’s tenure as president has been an endless consolidation of power.  He first took on official corruption, then dissidents, and now the Chinese tech powers.  Social harmony, in this view, requires the reduction of their wealth and the tempering of their power.  PIPL has some very interesting provisions.  It requires there to be legal permission to process data, and those permissible purposes include processing human resources data.  However, the legal bases are narrower than those in the GDPR and very much narrower than those in the IAF model privacy legislation.  For example, knowledge creation, the creation of insights, is not a stated legitimate use of data.  The result is that the private sector’s ability to use data for insight development is governed by consent, and regulators can always challenge the effectiveness of consent, thereby limiting the fruits of observation. Recital 4 of the GDPR reminds us that data protection is a human right that needs to be balanced with other rights.  GDPR protects both individuals and data users.  
Under this construct, data users may argue the legitimacy of a data use to an independent authority and a very independent judiciary.  There is nothing independent about the agencies that will enforce PIPL, and the Chinese judiciary is answerable to the Chinese Communist Party. So, data protection law has added 1.3 billion Chinese to its domain.  But please do not think that this is a momentous day for effective and fair data governance.  It is not.

  • Summary of IAF Comments on Proposed EU AI Regulation

    Using AI to enable the creation of better insights and better predictive capabilities is a clear objective of the European Commission’s proposed Regulation laying down harmonized rules on artificial intelligence (“AI Regulation”).  Fundamental rights and interests, such as employment, health, education, and the ability for smaller businesses to find a competitive advantage, are impacted by the trusted digital innovation that the Commission is attempting to foster.  The Information Accountability Foundation submitted comments to the Commission on 28 July 2021 with the clear purpose of fostering the innovation necessary to serve the full range of fundamental rights and interests for Europeans, and of making the AI Regulation an example to the rest of the world.  It is the IAF’s view that several areas of the AI Regulation require improvements for it to achieve its objective. The IAF’s comments were split into two parts: The first section addressed the interface between the objectives of the AI Regulation and the GDPR.  AI conducted with personal data is covered first by the GDPR, and processing prohibited or challenged by the GDPR will be difficult to use in an AI application. This result is at odds with the objective of the AI Regulation and could be resolved through some modest revisions to the GDPR. The second section covers the risk management components of the AI Regulation and whether they effectively address risk to people from processing data or from making the choice not to process data. The IAF thinks a more effective approach to risk management could be achieved, and that approach has been outlined in the IAF’s model legislation, The Fair and Open Use Act. [1]  This blog, the first of three, is a summary of the IAF comments submitted as part of the EU Commission’s consultation process.  Summary of Comments: Better predictions are the product of probability tools that rest on quality data, well curated and responsibly stewarded.  The nomenclature for this process is “thinking with data.”  Artificial Intelligence (“AI”) is the substantial next step in organisations thinking with data.  Thinking with data results in outputs or “insights,” and the nomenclature for that process is “knowledge discovery.”  Knowledge discovery is where AI begins and is the source for better predictions. New knowledge, once created through knowledge discovery, creates the pathway to the desired future of better operations, resource allocation, and environmentally beneficial outcomes.  The nomenclature for this process is “acting with data.”  This application of learning may take place either directly by people or through some degree of automated decision making.  This two-step approach is built into the AI Regulation in the delineation of providers of AI systems (thinking with data) and users (acting with data).  AI development involves experimenting or exploring with data for the potential of identifying signals or trends which lead to the opportunity to create a model whose outcome can drive decision making.  There is risk to people at both phases, but the risks may be very different, and arguably the risks relative to the impact on people are significantly less at the knowledge creation stage. However, the identification of risk requires robust assessments of the right types at the right time. The GDPR should make possible technology applications such as AI, and in the IAF’s view, there are unresolved tensions between the AI Regulation and the GDPR. 
Proposed Changes to the GDPR
Knowledge discovery with personal data meets the current GDPR definition of profiling, and at least part of the AI process is knowledge discovery.  AI solutions that make automated decisions by running personal data through a model developed using AI would meet the definition of fully automated decision making.  There are solutions for the specific areas of the GDPR that are problematic for AI. Article 5 requires that processing be lawful and fair and that data use be consistent with the purpose for which the data was collected.  AI training data can come from many different sources, so assuring that data was collected for a purpose consistent with AI training is difficult.  Instead, knowledge discovery should join scientific research in being recognized as a compatible purpose under Article 5.  Doing so would recognize that research uses of data, whether they meet the scientific test or not, raise fewer risks than the actual application of data to take an action that impacts individuals.  Article 6 requires that data be processed only when there is a lawful basis.  The legal basis that most often would fit is legitimate interest, but legitimate interest only takes into consideration the legitimate interest of the controller and the interest of the data subject.  Training data typically is not about the interest of a single individual but rather the interests of a broader group impacted by the insights that come from the AI processing.  Therefore, scientific research and knowledge discovery should be added to Article 6 as legal bases (g) and (h).  To protect these new legal bases from misuse, appropriate conditions, particularly transparency related to the objectives and safeguards for processing, should be added as well. Article 9, which covers the processing of special categories of personal data, has made the processing of data for scientific research purposes as well as for more general knowledge discovery quite difficult.  While scientific research is a compatible purpose under Article 5, if data has been collected for a different purpose, then the use of that data for scientific research requires consent if the data meets the special category test.  Instead, a pathway for using special categories of data in knowledge discovery and scientific research should be created in Article 9.  Article 35 requires Data Protection Impact Assessments (DPIAs) when data processing poses a “high risk” to individuals.  However, it does not distinguish between the risk controls appropriate at the knowledge discovery stage and those appropriate at the stage where insights derived from that discovery are applied.  Therefore, Article 35 should be amended to apply more clearly at the application-of-insights stage (use of data), where there is a much greater risk to individuals, and not at the knowledge discovery stage. Article 22 says that “the data subject shall have the right not to be subject to a decision based solely on automated processing . . . which produces legal effects concerning him or her or similarly significantly affects him or her.”  This language may preclude AI processing before the jurisdiction of the AI Regulation even comes into effect.  Therefore, Article 22 should be revised to describe profiling and automated decision making more clearly.  Profiling is central to the knowledge discovery process.  
As such, it should be subject to a DPIA to ascertain that processing is conducted in a legal and fair manner.  Automated decision making is a separate process. It should be subject to assessments, and the risks from a particular automated decision-making process should be understood and documented. Currently, the GDPR confuses the connections between the two.  Lastly, Recital 4, which is (or should be) the heart and soul of the GDPR, states: The processing of personal data should be designed to serve mankind.  The right to data protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights . . . . The intent of this Recital has not been adopted in the application of the GDPR in areas such as the balancing required under a legitimate interest assessment. In the IAF’s view, this is a byproduct of failing to explicitly carry forward the Recital’s intent into a specific Article. That failure could be remedied through revisions to Article 5 of the GDPR.
Risk Management Issues
A heavy focus on product safety drives many of the risk management requirements in the AI Regulation, placing less importance on the reality that AI is more about the implications of the data collected and produced and less about the product.  Instead, the AI Regulation should be adapted so that all AI application scenarios would be risk assessed, as opposed to applying only elements of the AI Regulation to a narrow set of defined high-risk scenarios. This change would allow for the tailoring of requirements to those applications that have the potential to create higher risk. Furthermore, in the AI Regulation, risk management tools have been narrowed to conformity assessments and applied only to a small subset of high-risk applications.  A conformity assessment focused on product safety does not adequately address the data-driven benefits and risks.  Moreover, conformity assessments are not consistent with the trend toward broader AI impact assessments or algorithmic impact assessments (AIAs).  In academic work, public sector developments and proposed legislation in other jurisdictions, the use of AIAs is suggested or proposed. For example, in Australia, Canada, Singapore and shortly Hong Kong, governments have implemented or will be implementing processes and requirements for their public sectors to address ethical and other risk challenges of AI. These guidelines often include AIAs. Even before laws and regulations drive the need for expansive impact assessments, public sector procurement rules will have an earlier impact. Therefore, the IAF suggests: Assessing the likely risk of an AI application should be required.  Doing so would allow for the extension of good data governance requirements to all AI applications. While many requirements included in the AI Regulation arguably should be applied to all areas of AI development relative to data, such assessments allow for a tailoring of requirements based on risk. Broad-based AIAs should be a mainstay requirement for all AI applications and should include the assessment of risk. Conformity assessment requirements as currently outlined in the AI Regulation could be incorporated into an AIA as appropriate, as could aspects of a DPIA as required by Article 35 of the GDPR.
Conclusion
The GDPR needs to be revised to accommodate knowledge discovery so the full range of fundamental rights is protected. 
Axel Voss’s white paper, “Fixing the GDPR: Towards Version 2.0,” outlines some of the structural issues in the GDPR related to knowledge discovery.  The AI Regulation does include an exemption for special categories of data to measure for bias, so exemptions to the GDPR through the AI Regulation are possible, but they are not the optimal approach.  The GDPR is not fine-tuned for AI, and the AI Regulation only covers “high risk AI.”  Moreover, the AI Regulation relies on conformity assessments that are dependent on established standards.  Work in trustworthy AI suggests that all AI that touches on people should undergo AIAs.  It is only when an AIA and associated risk analysis are conducted that a full sense of the risks associated with data processing through AI can be identified and mitigated. [1] The IAF developed a model legislative approach primarily as a communications tool, as it is easier to engage with policy makers in the U.S. in “legislative” language and format. While the model legislation is drafted with a U.S. policy maker audience in mind, it is illustrative, and parts and/or themes could be adopted in any other global legislative format.

  • My Head Spins – UK Gets Adequacy As Task Force Says GDPR Must Change

    The EU Commission today (June 28, 2021) announced that the UK has been granted adequacy for data flows from Europe to the UK.  The UK received adequacy, in part, because UK law is substantially the same as the EU GDPR.  Just last week the UK government published the final report from the Task Force on Innovation, Growth and Regulatory Reform, The TIGRR Report. It is the interplay between the adequacy finding and the TIGRR Report’s proposals that makes my head spin. The TIGRR Report proposes “Replacing [the UK] GDPR with a New UK Framework for Data Protection.”  The TIGRR Report describes this proposal more fully: Replace the UK GDPR 2018 with a new, more proportionate, UK Framework of Citizen Data Rights to give people more control over their data while allowing data to flow more freely and drive growth across health care, public services, and the digital economy. The proposal goes on to say in part: The UK has the opportunity to cement its position as a world leader in data, through a combination of proportionate, targeted reforms that boost innovation, and by maintaining its enthusiasm for digital. The Government should use an approach to data based more in common law, so case law can adapt to new and evolving technologies such as artificial intelligence and blockchain. . . .  Reforming GDPR could accelerate growth in the digital economy, and improve productivity and people’s lives by freeing them up from onerous compliance requirements. In a survey by DataGrail, 49% of business decision makers reported spending over 10 working days a year just to sustain GDPR compliance, with 12% spending over 30 working days a year. A more proportionate approach would free up many businesses to provide more value to the consumers and other businesses they serve. The TIGRR Report also proposes encouraging AI by removing Article 22 of the GDPR, which covers automated decision making and profiling. So, my head spins not because the proposals have been made, but because of their timing right before the EU announced UK adequacy.  It is hard to see how the UK would stay adequate with the changes suggested by the TIGRR Report. The TIGRR Report task force is not alone in finding issues with the GDPR as it relates to advanced analytics and AI.  Axel Voss, shadow rapporteur for both data protection and AI in the European Parliament, has published a white paper entitled “Fixing the GDPR: Towards Version 2.0.”

  • 50 Year Heritage of the FAIR and OPEN USE ACT

    Permissible uses for personal data, a key component of the FAIR and OPEN USE ACT, were invented in the United States 50 years ago.  The FAIR and OPEN USE ACT is the IAF’s forward-looking, risk-based model legislation.  Congress’s enactment of the Fair Credit Reporting Act (FCRA) created the infrastructure for the modern consumer economy.  It took consumer reporting from paper files to the digital systems that made consumer credit widely available.  Credit markets were local in 1970, and within a generation they were national.  The FCRA also enhanced consumer equity by making possible the 1974 Equal Credit Opportunity Act (ECOA).  Fair decision making requires complete data for the specific purpose of making decisions when individuals apply for credit, insurance, employment, and other critical consumer benefits.  The FCRA mandates that key actors (credit bureaus, reporters of data, and users of consumer reports) use that data only for permissible purposes, be transparent, and provide consumer protections such as the right to access and to dispute data.  The central point is that consumer reports are powerful and therefore should only be used for legitimate purposes when substantive decisions are being made related to consumer requests and benefits.  The processing must be fair, particularly because credit reporting is mandatory, with all credit-active individuals included in the credit reporting systems.  The law does not require just permissible purposes; it also requires transparency and responsible processing by all parties. There are rules that cover practices by the credit reporting agencies, the parties that report data, and those that use the consumer reports to make decisions.  There is a duty of care to obtain the maximum possible accuracy.  There are specific consumer rights to know there are credit reporting agencies, to access the data, and to dispute data thought to be inaccurate.  The FCRA is fair processing legislation. Legacy privacy legislation, however, is focused on procedures so individuals can control when their data are collected.  The latest privacy legislation, like the European General Data Protection Regulation (GDPR), is built, first and foremost, on the premise that processing personal data should be lawful and fair.  This means the purposes for which personal data are processed must be specific, and the GDPR specifies that personal data must be processed under one of six lawful bases.  Specific purpose and lawful basis have always been important, but the emphasis on a “risk based” system has shifted the attention.  Fairness becomes ever more important, while individual control receives less emphasis.  Fair processing is not new; the FCRA was enacted two generations ago.  With today’s complex data ecosystems, it is time to place the onus on the organization to first and foremost achieve fair processing rather than placing the burden on the consumer.  The IAF model legislation, the FAIR and OPEN USE ACT, does so by building on the FCRA’s concept of permissible purpose.  The FCRA requires that personal data only be processed for specific legitimate uses.  The IAF model legislation contains eleven legitimate uses (illustrated in the sketch that follows this list):
    • Compliance with a legal obligation;
    • Information security;
    • Routine business processes;
    • Requested product or service;
    • Protection against unlawful activity;
    • Public safety and health;
    • Affirmative express consent;
    • Knowledge discovery;
    • Research;
    • Advertising or marketing purposes (subject to conditions); and
    • Journalism. 
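In software terms, permissible purpose works like an allow-list gate on processing: a use is allowed only if the statute names it. The sketch below is a hypothetical illustration of that concept only; the identifiers mirror the eleven uses above but are invented, not taken from the act's text.

```python
# Hypothetical allow-list gate; identifiers paraphrase the model act's
# eleven legitimate uses and are illustrative only.
PERMISSIBLE_USES = {
    "legal_obligation", "information_security", "routine_business",
    "requested_product_or_service", "protection_against_unlawful_activity",
    "public_safety_and_health", "affirmative_express_consent",
    "knowledge_discovery", "research", "advertising_or_marketing",
    "journalism",
}


def processing_permitted(stated_use: str) -> bool:
    """Gate processing on the statutory allow-list rather than on
    individual consent alone."""
    return stated_use in PERMISSIBLE_USES


assert processing_permitted("knowledge_discovery")
assert not processing_permitted("data_sale_without_basis")
```

The design choice mirrors the FCRA's logic: the default is that processing is not permitted, and the burden sits with the organization to show its use falls within an enumerated purpose.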
The IAF model legislation has more purposes than the GDPR because the IAF has learned from the European experience.  For example, knowledge creation, which powers innovation, is a specific legitimate use.  The FAIR and OPEN USE ACT complements the legitimate uses with requirements that covered entities operate in a responsible and answerable fashion.  Article IV of the IAF model legislation fully describes the responsibilities of an accountable organization, and Article V spells out the specifics for managing the risk to other stakeholders.  Individual rights are described in Article III.  So, while the FAIR and OPEN USE ACT may seem to break new ground in fair processing legislation, it really is built on key concepts first enacted by Congress in 1970.  Many of the provisions, which have been expanded for today’s observational world, have their roots in legislation that successfully facilitated the consumer revolution of the latter part of the 20th century.  Today, a similar legal infrastructure for complex digital ecosystems is needed.  Please read the FAIR AND OPEN USE ACT.  An earlier blog describes how the IAF model provides a new paradigm for privacy legislation.

  • Time To Break the Privacy Legislative Paradigm – IAF Model Legislation

    Federal privacy legislation in the United States is stuck.  There are many reasons for this, but the fact is that the old privacy paradigm, that individual control is the keystone for effective fair processing, is no longer fit for its purpose.  Yet, the old paradigm is the starting point for most privacy legislation.  Georgetown law professor Julie E. Cohen captured this dilemma in her recent article, “How (Not) to Write a Privacy Law.”  Individual control has strong emotional pull.  The concept that individuals can control who has their information and how it can be used is compelling.  However, the fact is that this is an observational age where individuals’ information can be obtained and used without them knowing about it, and the data obtained through that observation drives advanced analytics, including artificial intelligence (AI), which, in turn, drives today’s digital society and economy. There is a different privacy paradigm.  It is one where the keystones are responsible and answerable behavior by companies processing data pertaining to individuals and where this behavior is overseen by strong regulatory authorities.  The one word for responsible and answerable is accountability.  There is one legislative model where the keystone is accountability, and that is the model legislation of the Information Accountability Foundation (IAF), the FAIR AND OPEN USE ACT. This model legislation is based on all the IAF has learned over the past three years.  State legislation has been enacted in California and Virginia, much has been learned from greater experience with the GDPR, and proposed federal bills contain unique features.  The IAF model legislation references where it relies on these sources. While other bills claim to be risk based, they fail to define the risk that is to be prevented or managed.  The IAF model legislation is clear that the risk to be managed is adverse processing and provides guidance on how to determine whether processing is adverse. Accountability and robust oversight pave the way for flexible innovation. The IAF model legislation has its roots in accountability’s essential elements, and the preamble to the IAF model legislation has three accountability principles that are color coded: Accountable and Measured; Informing and Empowering; and Competency, Integrity and Enforcement. While the legislative keystone is accountability, the IAF model legislation still requires full transparency and individual control where individual control is effective. The IAF’s mission is research and education.  The IAF refers to the FAIR and OPEN USE ACT as model legislation.  It is the IAF’s desire that its model legislation be debated and, hopefully, that parts of it be used in enacted legislation.  Over the next few months, the IAF will look for opportunities to introduce the elements of this model legislation to the privacy community.  The IAF also will publish blogs on specific features in the FAIR and OPEN USE ACT.  Marc Groman, former White House Senior Advisor for Privacy and the first CPO of the FTC, is the lead author of the model legislation.  Marty Abrams is the IAF chief strategist.  Barb Lawler, IAF COO, brings two decades of experience in leading CPO offices.  The three of them are ready and willing to engage in a dialog on the FAIR and OPEN USE ACT.

  • AI and the Road to Expansive Impact Assessments

    Assessments related to the complex use of data pertaining to people are about to expand significantly, requiring new skills from both organisations and regulators. Today, in many regions of the world, Privacy Impact Assessments (PIAs) are either explicitly required or implicitly required to achieve other regulatory demands, for example, processing documentation. However, these impact assessments are more about achieving compliance and, as such, are more of an assessment of risk to the organisation. In short, they often are not really about the impact on individuals as a result of complex data processing. In Europe, the GDPR has requirements for certain higher risk data scenarios where a Data Protection Impact Assessment (DPIA) is required. This assessment is intended to look at a broader range of interests and risks. However, as reported by regulators, DPIAs have not been implemented as intended in many organisations. Organisations will soon be conducting impact assessments that are much more expansive and rigorous than are required by today’s data protection and privacy laws. The driver of this trend will be the risks associated with Artificial Intelligence (AI). This approach may be driven by organisations seeking to manage new, more complex risks associated with the use of AI but also by requirements relating to government procurement. However, it eventually will be mandated by new laws governing the use of AI and the associated fairness implications for people. A prime example of this type of legal trajectory is the recently released EU Proposal for a Regulation laying down harmonized rules on artificial intelligence (Proposed Artificial Intelligence Regulation) with its myriad requirements on organisations. In the U.S., standalone AI-related legislation and the requirement for more comprehensive assessments are part of some proposed Federal legislation and State-enacted privacy legislation. A number of leadership organisations that have already started down the path of more expansive assessments were profiled in the IAF report on Demonstrable Accountability – Why It Matters. As AI starts to affect almost every aspect of society, addressing its risks will require an end-to-end, comprehensive, programmatic, repeatable, demonstrable governance system for adoption by all organisations seeking to use these complex systems. These governance systems will include broad, risk-based decision-making assessments. While laws are likely to evolve in this space, the demands related to addressing AI risks require a “trust” driven approach rather than just a “legal” compliance approach to accountability. This approach evolves accountability based on legal requirements into accountability based on “fair processing” of data. In short, trust in more complex, more opaque systems that have impact on individuals will require more fair processing obligations on the organisations using these systems. Since the release of the IAF’s report, further study has illuminated the key role impact assessments play in a demonstrable governance model. To understand this issue more completely, the IAF took a deeper look at some of the organisations studied in the first report. However, this study quickly evolved into an examination of the role that the adoption of AI and the associated advanced governance model was going to play. 
By extension, the drivers of “responsible” AI indicate that the push for a different and more expansive governance model is likely to come from the AI community and not the data protection community. Drivers include organisations following the example of today’s leaders and increasingly will include mitigating reticence risk as organisations seek to make risk-based decisions related to the adoption of technologies involving AI. The implications are clear. These new requirements will demand new skills, roles, and capabilities in organisations. They also mean new skills and resources will be required by regulators, especially as laws and associated enforcement models are developed. To enable trust-based AI that is demonstrably fair, organisations and regulators will need to evolve. The IAF’s new report, The Road to Expansive Impact Assessments, addresses the role of expansive impact assessments. It first takes a deeper look at some of the original organisations studied as well as a view of demonstrable accountability by data protection regulators. It then goes on to outline the role assessments play, to analyze the drivers of more complex assessments such as Algorithmic or AI impact assessments (AIAs), and to conclude with a deeper look at the role of AI and its associated governance needs as a driver of this trend. The appropriate structure and mix of regulation and enforcement will require considerably more thought and further work. However, given all the trends noted in this report, organisations should be developing more robust governance systems, including more expansive impact assessments, both to enable their business strategies and to inoculate themselves against the likely direction of regulatory requirements.

  • A Trusted Digital Ecosystem Requires Glass Breaking Legislative Change – IAF Summit Recap

    The observational age that drives change in all avenues of human life requires data governance marked by an orderly transition to demonstrable accountability.  Provable responsibility means new ways of thinking about fair processing legislation driven by clear objectives and investments in the regulatory infrastructure.  Accountable organizations during this transitional period will need demonstrable operational efficiencies as they wrestle with thorny issues like disease-free human spaces during a global pandemic.  The preceding three sentences summarize the IAF annual summit held virtually over three days in April.  As the title of the summit, “Mapping a Pathway for Functional Accountability Through the Data Protection Chaos,” indicates, the summit looked at three different but related issues, one on each of its three days.  The first day took a look at demonstrable accountability from the perspective of both organizations building demonstrable structures and regulators demanding clear evidence of those structures and risk processes.  Since both organizations and regulators frequently lack the resources to meet these goals, two panels, one consisting of companies and the other made up of regulators, responded.  Representatives of leading companies discussed how their companies use a trust model or code in the operation of their businesses that also helps protect the privacy of their customers’ personal information. The regulators discussed how they look at evidence of demonstrable accountability, which can include privacy impact assessments and data protection impact assessments, audits, and compliance with codes of conduct and certification schemes.  Day two focused on whether information policy legislation should be based on incremental or transformational changes to existing laws. The first panel discussion began with principles for accountability-based legislation and flowed into the need for clear objectives when designing legislation.  The second panel weighed whether objective measures could be used to make subjective decisions on fair processing program adequacy.  This session illuminated the need to go beyond enforcement and look to oversight as a means of achieving fair practices.  The day finished with a panel discussion on whether the drivers for real changes in privacy approaches could overcome the natural tendency of law to revert to legacy models even when those models are out of date. Day three focused on operational efficiency to smooth the way to accountability implementation.  It began with a prediction by the President and CEO of the IAPP that the privacy field needs 500,000 new privacy professionals because new privacy laws have created a growing demand.  This prediction is consistent with day one’s discussion about skill gaps at both companies and regulatory agencies.  Next, a panel discussed how internal and external audit can provide demonstrable proof points and made the case that sound controls are necessary for performance reviews.  The next panel focused on efficiency in vendor oversight and reviews, a pressing issue for organizations with thousands of vendors. The last panel used vaccine passports as an example of demonstrable accountability to illustrate the issues that need to be addressed and how to address them (e.g., don’t confuse travel and privacy issues, the necessity for accuracy of identity when using health data, equity issues).    
What linked the days together was the complexity of an observational age, where simple responses aren’t up to the trust and operational challenges in what has traditionally been called privacy and data protection. A complex observational world that drives predictive solutions requires equally complex controls. This complexity in turn requires investments in change. Additional qualified resources are needed to make an accountability-based system work, hard operational choices will need to be made when new requirements overwhelm old ones, and transformational legislation is necessary even though policy makers only want to discuss enhancements to individual control. Where will these discussions take the IAF research agenda? Future policy calls and research projects related to what the IAF learned at the summit will be announced in future blogs.

  • Enforcement: To Be Effective, One Needs to be Selective

    Richard Thomas, when he was Information Commissioner of the United Kingdom, said: “To be effective as a regulator, one needs to be selective.” I suggest that regulatory clarity also is helpful. The IAF has been working on model fair information legislation for the past three years. Many difficult issues are raised by this effort, but a particularly difficult one has been enforcement and active oversight, and this issue has not perplexed the IAF alone. Enforcement, and its effectiveness, has been an issue in all of the states considering legislation. Who enforces the law, what exactly will be enforced, and will the resources be available to protect the public through enforcement? Additionally, if enforcement resources are limited, should enforcement be through private rights of action?

Complaints about enforcement exist across the globe. Enforcement in Europe is too slow, and European equivalents to class actions are pending. Regulators are being flooded with individual complaints, and in the EU, regulators must investigate every individual complaint. There is active debate in Brazil regarding how the new data protection law, the LGPD, will be enforced and who will enforce it. In Canada, the powers of the Federal Privacy Commissioner are being revisited, including whether there should be a new oversight tribunal.

Yet there is one voice of clarity. The Singapore Personal Data Protection Commission (PDPC) has issued the Guide on Active Enforcement (Guide). The Guide, in 32 pages with very large type, lots of white space, and informative graphics, makes clear the PDPC’s enforcement objectives, how those objectives will be put into effect, the PDPC’s expected timelines, and the role of monetary penalties. The Guide follows three years in which the PDPC built out a maturity model for accountability and advised the government on how to amend Singapore’s existing data protection law for greater flexibility dependent on mature accountability.

Singapore is a small country with a unique political, administrative and legal culture. It enforces data protection as a consumer interest rather than as a fundamental right. The Personal Data Protection Act (PDPA) is based on the OECD Guidelines and is modeled after the Canadian private sector privacy law. Lastly, the PDPC is aligned with the government agency that encourages digital growth. Significantly, the Guide’s clarity of direction and clear communication of objectives and methods are transferable to other locations.

The Guide begins with three key objectives:

- To respond effectively to breaches of the PDPA, focusing on those that adversely affect large groups of individuals and where the data involved are likely to cause significant harm to the affected individuals;
- To be proportionate and consistent in the application of enforcement action to organisations found in breach of the PDPA, with penalties that serve as an effective deterrent to those that risk non-compliance; and
- To ensure that organisations found in breach take proper steps to correct gaps in the protection and handling of personal data in their possession and/or control.

A major sticking point on enforcement is when to resolve differences between the organization and individuals and when to enforce.
The PDPC makes clear that its preference is mediation and facilitation rather than investigations, but that enforcement is key: “Notwithstanding, the PDPC will not hesitate to send a clear message of wrongdoing where necessary.” The Guide goes on to describe the investigatory process clearly. It discusses when voluntary undertakings might replace a full investigation and what happens if the PDPC is disappointed with voluntary undertakings. Where monetary penalties are warranted, it describes how fine levels will be determined. As stated earlier, the Guide should be seen in the context of the PDPC’s overall strategy. Singapore has an accountability maturity model, and the PDPC has used regulatory sandboxes, has guided the government in creating exemptions to consent, and has recognized a certification program.

California is in the early stages of creating a new privacy agency, Virginia has given new responsibilities to its Attorney General, and Washington state may pass legislation in the next few weeks. All of these states could learn from the systematic approach in Singapore. Private rights of action, agency structure and powers, and the right to cure are being debated in Washington, DC. The IAF has spent a great deal of time deliberating how the FTC would transition from a pure enforcement agency to one equipped to conduct oversight. In the end, it comes down to confidence that the agency will conduct its role in a clear and transparent way.

So, the UK and Richard Thomas taught that to be effective, it is necessary to be selective. More recently, Singapore teaches the importance of an overall strategy with a clear definition of the enforcement role. The IAF will explore matching resources and capabilities to challenges on Days 1 and 3 of its Summit (April 13 and 15) and oversight and enforcement on Day 2 (April 14). All these topics will be explored in future policy calls. Contact Stephanie Pate at spate@informationaccountability.org if you would like to attend.

  • Avoid the Well-Worn Groove to Make Privacy Governance Better

    Writing privacy legislation that is fit for its intended purpose is not easy. Just using the term “privacy legislation” can be counterproductive. The term “privacy” insinuates that individuals can control the assumptions made about them based on the history of their interactions with others. The observational age has made effective control of personal data, and the insights they beget, almost impossible. Any suggestion that individuals are able to block observation runs into the growing practicality of how things work and the corresponding necessity of observation. Smart phones have to connect, and smart cars need to stop. Pandemics need to be traced, and personalized medicine needs to be personalized. Much modern technology is dependent on observation.

Yet observation often begets manipulation, manipulation has an effect on the individuals people become, and manipulation at scale has an effect on the world they live in. Increasingly, solutions need to be found, or the environment individuals live in will become more and more hostile.

Pushing against all evidence that individual control is not fully effective, privacy regulation and privacy legislation tend to slot back into the same old groove: that individuals can govern the information market by the choices they make if they just have the opportunity to understand. Many individuals do not have the time, knowledge base, and focus to fully understand even information disclosed with the intent to be as transparent as possible. As irrational as it may seem to return to the same groove, such an approach is reflected in pending legislation in Canada, newly enacted legislation in Virginia, and proposed legislation in the United States. Furthermore, it is amplified in guidance from the European Data Protection Board. If all else fails, fall back on the same old explanations and make the same mistakes again.

An excellent description of this conundrum is the newly published paper by Professor Julie E. Cohen, “How (Not) to Write a Privacy Law”, published by the Knight First Amendment Institute at Columbia University. Professor Cohen, in her introductory section, defines the problem: “Current approaches to crafting privacy legislation are heavily influenced by the antiquated private law ideal of bottom-up governance via assertion of individual rights, and that approach, in turn, systematically undermines prospects for effective governance of networked processes that operate at scale. . . . Effective privacy governance requires a model organized around problems of design, networked flow, and scale.” Professor Cohen goes on to analyze most of the proposed federal privacy bills and finds that they settle back into the same groove first cut by Alan Westin back in 1967. If there truly is an interest in understanding the problem before discussing solutions, Cohen’s paper is mandatory reading.

The IAF’s mission is not enacting legislation; its mission is research that begins to spell out what outside-the-box legislation might look like. So, the IAF has pursued accountability-based fair processing legislation. The more it has ventured into this endeavor, the more complicated the puzzle has become. Accountability-based fair processing legislation would require not only a retrofit of corporate processes but also a reset of the expectations of those that oversee the fair governance of data.
To get there, the IAF has returned to the basic principles that would drive the process:

PRINCIPLES FOR FAIR PROCESSING ACCOUNTABILITY

Risk-Centered
A risk-centered approach to fair data processing, necessary to achieve shared goals for beneficial innovation, trust and fairness, bases decisions on the likelihood and the severity of harm and the degree of benefit to people, groups of people, society and organizations if data are processed or not processed.

Accountable and Measured
Such a risk-based approach requires organizations to be accountable, with accountability defined as organizations being responsible for how data are used and being answerable to others for the means taken to be responsible. While organizations have primary responsibility for fair processing, individuals still have control where uses are impactful and individual controls are effective. A decision is not risk-based unless there is a measurement of the risks and benefits at issue and the integrity of the assessment is demonstrable to others. Risk/benefit decisions are not always intuitive. They require assessments that identify the parties that might be impacted by the use of data, how they might be impacted, and whether the risks and benefits are mapped to the people, groups of people and society affected. The matching of risks to benefits might not be one-to-one, but discrepancies must be understood and reasonable. Decisions must be explainable to others based on objective measures. While loss of individual autonomy is a risk factor, risks and enhancements to other fundamental human interests, like health, employment, education and the ability to conduct a business, must also be part of an assessment (a toy sketch of what such a measurement might look like appears at the end of this post).

Informing and Empowering
Organizations have a proactive obligation to inform stakeholders about the data processed and the processes used to assess and mitigate risk. While fair data processing is less dependent on individuals’ decisions, where individuals do have rights, those rights should be transparent and easily exercisable. This relationship between individual rights and fair data processing facilitates organizations being held to account.

Competency, Integrity and Enforcement
Organizations are evaluated by the competency they demonstrate in reaching decisions to process, their honesty in making decisions that serve the stakeholders that are impacted, and the alignment of their disclosures and actions. All organizations will make mistakes, and some of those mistakes will impact people, groups of people or society. Organizations are responsible for those outcomes, but there is a difference between systematically bad decisions and anomalies. A well-resourced regulatory enforcement mechanism is necessary for a risk-centered, accountability-based governance system to be trusted.

The IAF believes these principles can be charted to the language contained in its model legislation. The IAF believes that such legislation would avoid the well-worn groove that imperils privacy law that should be fit for tomorrow. The IAF will explore these principles during day 2 of the IAF Summit to be held on April 14.
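The Risk-Centered and Accountable and Measured principles call for measuring the likelihood and severity of harms and benefits across stakeholders. As a loose, hypothetical illustration of what making such a measurement demonstrable might look like (the categories, scales, and example entries below are invented for this sketch and are not an IAF methodology), consider the following:

```python
# Hypothetical sketch: scoring the risks and benefits of a proposed
# processing activity by likelihood and magnitude, per stakeholder group.
# All scales, weights, and example entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Impact:
    description: str
    stakeholder: str       # e.g., "individual", "group", "society"
    likelihood: float      # 0.0 (never) .. 1.0 (certain)
    magnitude: int         # 1 (negligible) .. 5 (severe or substantial)

    def score(self) -> float:
        # Expected impact: likelihood-weighted magnitude.
        return self.likelihood * self.magnitude

def assess(risks: list[Impact], benefits: list[Impact]) -> dict:
    """Summarize a processing decision; discrepancies between risk and
    benefit scores must still be explained and justified by a person."""
    risk_total = sum(r.score() for r in risks)
    benefit_total = sum(b.score() for b in benefits)
    return {
        "risk_total": risk_total,
        "benefit_total": benefit_total,
        "net": benefit_total - risk_total,
        # Per-stakeholder mapping, echoing the Risk-Centered principle.
        "risk_by_stakeholder": {
            s: sum(r.score() for r in risks if r.stakeholder == s)
            for s in {r.stakeholder for r in risks}
        },
    }

if __name__ == "__main__":
    risks = [
        Impact("re-identification of pseudonymized records", "individual", 0.1, 4),
        Impact("chilling effect on service use", "group", 0.3, 2),
    ]
    benefits = [
        Impact("earlier detection of billing errors", "individual", 0.7, 3),
    ]
    print(assess(risks, benefits))
```

The point of such a sketch is not that a number decides anything; it is that recording likelihoods, magnitudes, and stakeholder mappings is one way to make an assessment explainable to others based on objective measures, as the principles require.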

  • Lesson from the North: The Expectations for Federal Privacy Legislation Will Overwhelm the Process

    By: Lynn Goldstein & Marty Abrams

Are the goals of a comprehensive U.S. privacy bill to cure the ills of surveillance capitalism? Algorithmic discrimination? Labor displacement through AI? Data monopolies as a driver of concentration? Law enforcement through private observation? Passing comprehensive federal privacy legislation was complex enough when the issues were AdTech and individuals’ access to their data. When solving all the ills of the Fourth Industrial Revolution is added, the prospects for comprehensive federal privacy legislation become daunting. Comprehensive federal privacy legislation will be impeded if the law is expected to cure all the wrongs of a digital age.

In a Biden administration, progressives have more clout in Washington. Senator Coons of Delaware is chairing a Senate Judiciary Subcommittee on Privacy, and he has authored legislation on algorithmic discrimination. However, the full range of information social equity issues has not yet been articulated in Washington. It has been stated clearly in the country north of the U.S., in the debate over new privacy legislation in Canada.

An excellent description of the demand for social equity is an essay in the Canadian “National Journal” by Jim Balsillie, the former co-CEO of Research in Motion. The Balsillie commentary clearly articulates the growing progressive view that privacy legislation should deal with all the warts of the observation age, and it pushes against the manifest destiny of insights driven by analytics. Canada’s attempt to enact new federal privacy legislation is Bill C-11, tabled in the House of Commons last December. C-11 is the government’s attempt to reconcile Canada’s digital ambition with Canadians’ long-held view that people should control their data. Balsillie does not believe C-11 will solve the ills of an observational age:

In the early 21st century, every industry became a technology industry, and now just about every internet-enabled device, and online service, is a supply-chain interface for the unobstructed flow of behavioural data that’s used to power the surveillance economy. This has not only meant the death of privacy, but has served to undermine personal autonomy, free markets and democracy. Today’s technologies get their power through their control of data. Data gives technology an unprecedented ability to influence individual behaviour. The economic incentives of companies in the surveillance economy differ so sharply from those of traditional businesses that new data governance rules are needed to contain them and prevent abuses. . . . This is why the data-driven economy has, in less than 10 years, created the greatest market and wealth concentrations in economic history, reduced the rate of entrepreneurship, innovation and business dynamism and lowered wages . . . . Such monopolistic companies are not contributing to the productive capacity of the market, but instead setting the market standard to which all other firms must conform to in order to survive.

Similar views have been penned in the U.S. The term “surveillance capitalism” was coined by Shoshana Zuboff, a professor at Harvard University. But these views have not coalesced into an articulation of privacy legislation. Instead, privacy legislation has centered on greater consumer control, more detailed record keeping and transparency, and risk assessments with no clear direction on the risks to be weighed. This direction will change.
Hearings have already begun on the negative effects of an information economy. At some point the concerns for social equity in an information economy will be linked explicitly with privacy legislation, and the expectations for that legislation will expand beyond reasonable boundaries.

Balsillie does not set forth what he thinks should be in privacy legislation. He does describe the shortcomings of a consent-based regime where the power lies with the party requesting consent. Instead, he paints broad expectations of what he thinks is needed and why:

Canadians need a new legal framework that outlaws the collection of behavioural data on massive scales and the algorithms that micro-target and manipulate human behaviour. . . . A revised bill C-11 should address the corrosive imbalances in the relationship between individuals and data intermediaries. . . . If C-11 becomes law in its current form, markets will become more concentrated, our social challenges will become more pronounced and our democracy will continue to be insidiously subverted by technologies that facilitate surveillance and manipulation . . . .

No privacy bill, or even fair processing legislation such as the IAF has suggested, will solve every ill of an information age: no place to be invisible, imbalances of power, dated competition law, hidden discrimination. Comprehensive federal privacy legislation will solve many problems, but it will not cure all the ills of the Fourth Industrial Revolution. However, there are growing expectations that it should do just that. Those views will be articulated in Washington just as they have been in Ottawa. That articulation will make passing comprehensive federal privacy legislation even more difficult.

  • Barb Lawler Joins IAF as COO

    The Information Accountability Foundation ( IAF ) is pleased to announce that it is adding to its leadership. Barb Lawler is joining the IAF as Chief Operating Officer and Senior Policy Strategist effective April 1, 2021. Barb brings over two decades of business privacy implementation experience to the research and policy non-profit foundation. First, she will bring a management and operational focus to IAF projects, and second, she will drive accountability implementation-directed research. Barb is already leading the IAF’s Accountability Operations Discussion Group. Most importantly, Barb will partner with Executive Director Marty Abrams to lead the IAF through the uncertain change driven by increased observation, ever expanding analytics, and public policy efforts to legislate and regulate in confusing times. “This partnership with Barb truly expands IAF capabilities and service to our entire community,” said Marty Abrams. “Her grounding in the operations side of modern privacy management completes the IAF research approach.” “I’m excited about the opportunity to help lead the IAF in its mission to promote responsible corporate data stewardship that is respectful of the fundamental right to fair processing,” noted Barb. “This is a critical juncture for data policy leadership in a time of conflicting crosswinds, and the IAF is poised to expand its impact in the U.S., Canada, Europe and beyond.” “I am thrilled that Barb is joining the IAF,” said Scott Taylor, IAF Board Chairman. “Barb is deeply respected by all stakeholders and brings a proven depth of experience and leadership. Her practical perspective as a thought leader and practitioner will benefit the organization and its members.” Barb Lawler (FIP/CIPM/CIPP-US) is a recognized privacy leader who most recently was the Chief Privacy and Data Ethics Officer of Looker Data Sciences, which was acquired by Google Cloud, and previously held roles as the Chief Privacy Officer of Intuit and of Hewlett Packard. She has an extensive track record in shaping the thinking of U.S. and global policymakers on data policy issues and has delivered formal testimony to the U.S. Senate (twice) and House, as well as to the IRS, FTC and California State AG. She is a member of the Internet Ethics advisory board to the Santa Clara University Markkula Center for Applied Ethics. Barb, a Bay Area native, is a graduate of San Jose State University and splits her time between Los Gatos and Santa Cruz, CA. The IAF is the preeminent global information policy think tank that creates scholarship and education on the policies and processes necessary to use data wisely in a manner that serves people. The IAF’s goal, through active consultations and research, is to achieve effective information governance systems that facilitate information-driven innovation respectful of the fundamental right to fair processing. The IAF meets this goal by working with regulatory agencies, policymakers, advocacy organizations, and businesses to achieve solutions that meet the full range of fundamental rights. To reach Barb, email her at blawler@informationaccountability.org .

  • Schrems II and HR Data

    The balancing of rights and freedoms is one of the great innovations of the EU General Data Protection Regulation (GDPR) but one of its great challenges as well. The Charter of Fundamental Rights of the European Union recognizes fifty political, social and economic rights for the people of Europe. Recital 4 of the GDPR states that the right to the protection of personal data should be balanced against other fundamental rights, and the Court of Justice of the European Union (“CJEU”) has stated that national authorities and courts should consider a fuller range of rights.

Past IAF publications have suggested processes that balance the numerous risks and benefits to people associated with a decision to process or not process personal data; this balancing should weigh the risks and benefits against each other based on the likelihood of each risk taking place and the magnitude of each particular risk. The same risk management approach should be used when deciding whether to transfer data to a third country. The decision by the CJEU in Schrems II and the draft EDPB Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data (Draft EDPB Guidance) provide an opportunity for the IAF to look at this balancing in detail as it relates to the full range of fundamental rights and freedoms.

Specifically, the IAF looked at the transfer of human resources data from the EU to a third country and the risk to employees if such data were not transferred. More specifically, the IAF looked at the risk to the right to protection of personal data, Article 8 of the Charter of Fundamental Rights of the European Union, and to the freedom to choose an occupation and the right to engage in work, Article 15. The team, Nancy Lowrance and Lynn Goldstein, interviewed a group of companies about their human resource processes and how they would be impacted if transfers from the EU were limited. They then conducted a policy analysis in which numerous individual rights and freedoms were balanced. That research may be found in “Addressing Human Resources Data Flows in Light of European Data Protection Board Recommendations.”

While the research paper focused on Schrems II and the Draft EDPB Guidance, the same analysis is relevant to other national and draft provincial laws that require adequacy. This analysis also begins to illuminate the need for a new vocabulary for framing the balancing of rights and freedoms. For example, the term proportionality suggests a balancing of two factors against each other. Issues such as vaccination passports will involve numerous fundamental rights and freedoms across many different stakeholders, and the term proportionality will not work for that multi-factor analysis. A future IAF paper will explore that issue. Please let us know what you think by emailing Martin Abrams at mabrams@informationaccountability.org .

  • In 2021, the Policy Currents will Blow Hard from Numerous Directions

    A generation ago privacy was much simpler. By “a generation ago,” I mean the beginning of this century. Yes, the early years of the Internet as a consumer medium and a newly enacted EU data protection directive were forces to reckon with, but the complexities of California’s data breach law, smart phones, big data, and the EU’s ePrivacy Directive and General Data Protection Regulation (GDPR) didn’t exist yet. Additionally, the growth in analytics-driven data (and technology) use, such as artificial intelligence (AI), hadn’t been experienced. The IAF’s charge is to consider policy answers about factors that will emerge in the next 18 to 36 months, on the assumption that policy is fairly stable over horizons shorter than 18 months. That is no longer a fair assumption. Data breaches like SolarWinds, court cases like Schrems II, and decisions like the one in Singapore that contact tracing data might be used for criminal investigations change the calculus. For the IAF to meet its mandate, it needs to consider all the cross currents that impact policy related to impactful data flows. Stakeholders such as regulators, policymakers, NGOs and companies also must consider these cross currents. Toward that end, the IAF team aggregated all of these cross currents existing at the beginning of 2021. Several non-IAF colleagues reviewed and contributed to the list. These cross currents, which influence the initiatives undertaken by the IAF, include (not necessarily in order of importance):

- Data insecurity
- Observation, tracking, context, and control
- Data as an asset, advanced analytics, and norming
- Global data transfers
- Public interests versus individual privacy
- Organizational operational friction
- Similar regulatory operational friction
- Degradation of enforcement and regulatory controls
- Impact of non-governmental entities
- Unsettled direction of privacy and data protection law
- Public trust diminished by misinformation enabled by social networks and other media
- Emergence of dominant digital players
- Changing global balance of power

This list is set forth in greater detail in an Appendix at the end of this article. When counted, there are thirteen different conflicting currents. All are important, but thirteen inputs are hard to manage. So, the staff at the IAF took those thirteen currents and derived five themes that can be used to understand the frictions that get in the way of mapping more productive information policy:

1. Accelerating trust deficit. The deficit reflects a broader societal distrust in institutions, ranging from decreased trust in aircraft design and vaccine development to fears that an honest election is impossible. In data protection, individuals’ mistrust begins with their data not being used to serve their interests and with frequent data breaches. For regulators, it is the impression that organizations can’t be trusted to be accountable. This distrust is reflected in regulators expressing a lack of confidence in the efficacy of legitimate interest assessments and in court cases like Schrems II.

2. Dissonance between policy objectives and political rhetoric related to privacy. Policy makers say they want a digital future where advanced analytics drive economic growth, efficiency and global competitiveness. Such an ambition requires more effective laws to govern data use pertaining to individuals. At the same time, reading the distrust of people, policymakers announce privacy reforms that look back to 1980 and not forward to 2030. This same dissonance is seen in organizations. While organizations look for legal certainty, they covet a digital future that changes everything. Digital ambitions require flexibility, and flexibility is inconsistent with certainty. Flexibility is not inconsistent with sound, demonstrable process, but such processes are hard to implement and hard to oversee.

3. Operational overdrive. Data protection authorities are overwhelmed by the volume of complaints they receive, guidance they must draft, audits they must conduct, negotiations in which they must participate, and new technologies they must understand. The magnitude of this work drives them to a state of operational overdrive. The same is happening to the data protection offices in organizations. There is today’s California law and tomorrow’s changes to it. There is the GDPR and Brazil, which are to be followed by China’s privacy law developments. There are Schrems II supplemental measures and investigations by the FTC. On one hand, a vision to lessen dissonance should be created; on the other hand, because budgets are flat, activities will have to be cut to make room for new ones. Organizations find these decisions incredibly difficult to make.

4. Resource mismatch. Traditionally, privacy enforcement agencies and privacy offices were staffed by lawyers, investigators and administrators. Today, they must also be staffed by project managers, ethicists, scientists, technologists and operational experts, and they still have the same number of lawyers, investigators and administrators. Today’s data environment requires people who can make judgments in areas that are increasingly grey against a broad cross section of interests, and there is a resource mismatch between what is necessary and what is funded. Each day that mismatch lingers, the friction between the cross currents listed in this article grows.

5. Conflicting cultures. Last, and in many ways most important, are conflicting cultures. There are obvious conflicts between Western European concepts of individual sovereignty and Asian concepts of community harmony. However, there also are cultural conflicts between civil and common law, independence and collegiality, and rights pluralism and a focus on singular rights. Organizations have cultural conflicts between compliance and scientific (research) curiosity; such conflicts are to be expected. However, there must be a means for harmony between cultures, and the lack of that harmony spins the conflicting currents into wind shear.

This is the IAF’s view of how cross currents turn into actionable trends. Your input is important to the IAF team. Please let me know if you have additions or corrections to this list by emailing me at mabrams@informationaccountability.org .

Appendix – Cross Currents in Detail

Data insecurity
- Cybersecurity
- Nation-state actors are getting more aggressive.
- Data breach notifications are overwhelming regulators.

Observation, tracking, context, and control
- Observation technologies are becoming increasingly necessary for the implementation of health, public safety, security, service and Internet of Things device solutions.
- The relationship between observation and AdTech tracking and data use is often convoluted in the policy/advocacy arena, creating friction for both observation technologies and marketing. This conundrum is often framed in the following manner: advertising (and its support) is necessary for competitive markets, but is observation (and by extension data) necessary for effective advertising?
- Use in context is increasingly based on trustworthy organizational decision making, but trust in that decision making is in a deepening deficit.
- The more data origination is observed or inferred, the more individuals lose control over that data and the less transparent data use is to all parties. Nevertheless, consumers still expect seamless experiences.

Data as an asset, advanced analytics, and norming
- More organizations recognize that data is an asset that must be utilized aggressively through advanced analytics, including AI, to stay competitive.
- Impacts of decisions made based on flawed analytics are becoming more visible. AI and machine learning (ML) exacerbate this challenge.
- Automated decisions well within risk parameters are impacted by bright-line rules related to fears about profiling.
- Human reliance on machine-generated conclusions, which are probability based, exists even when there is human involvement.

Global data transfers
- Data localization acceleration has been powered in part by the effect of Schrems II.
- More jurisdictions, like Quebec, are demanding adequacy with their laws and cultures.
- Difficulties continue in bringing the right parties together to create accountability norms for government use of private sector data for national security interests.

Public interests versus individual privacy
- Governments increasingly use new technologies to carry out mass surveillance of citizens.
- Stress between governments and courts over use of private sector data for national security and law enforcement interests is increasing. Stress is caused by private sector opposition to government reliance on their data and by resulting conflicting regulations.
- The COVID-19 pandemic has created new and enduring challenges for organizations processing personal information and for the privacy of those affected.

Organizational operational friction
- Privacy offices increasingly are consumed with implementation of new laws, regulations, interpretations, and court cases at a time when greater adoption of data as an asset requires more strategic intervention by them.
- Internal data governance tends to be siloed, and this isolation impacts accountability.
- New skills in IT, data management, data science and governance will be required and integrated within organizations to support technology-driven data application/use governance (e.g., AI).
- Compliance costs are increasing while budgets for resources, training, and operational priorities are marginalized, resulting in potential harm to the organization as well as the individual and in sub-optimization of value creation.

Similar regulatory operational friction
- Regulators must balance mandates to write new guidance, interpretations and responses to legislators while also investigating and enforcing the law.
- Appropriations grow at a very slow pace, and resources need to be reallocated to understand new technologies and business processes.

Degradation of enforcement and regulatory controls
- Overstressed and under-resourced regulatory agencies are grappling with very complex balancing equations and are resorting to bright-line answers. This is due to lack of:
  - Time to develop strategy and vision
  - Trust in controllers, particularly in complex data environments
  - People resources
  - Harmonization of legal cultures in a global arena
  - Trust in organizational capabilities and accountability
  - Understanding of the complexities of the digital landscape, the impact on individuals and the intersection with the problem they are seeking to solve
- Sometimes inflexible legal mandates to respond to every individual complaint limit the resources available for strategic self-initiated investigations.
- A perceived lack of discretion exists to interpret context against the full range of interests, not just autonomy (including beneficial interests).
- Lack of meaningful enforcement to date causes frustration for some stakeholders that have invested heavily in compliance based on the threat of enforcement.
- Uncertainty exists about the role of global or regional organizations, such as the OECD, COE, APEC and GPA.

Impact of non-governmental entities
- NGO direct action has impacted ballot initiatives in California.
- Direct NGO lawsuits are increasing in Europe and potentially in other jurisdictions.

Unsettled direction of privacy and data protection law
- European and Californian models are evolving.
- The Canadian draft is muddled.
- Singapore is culturally specific.
- U.S. federal privacy legislation isn’t innovative and is slow paced.
- The impact of China’s regulatory path and approach is uncertain.
- There are multiple non-data-protection-driven laws in areas such as AI.
- The increasing intersection of competition law, telecom rules, data and system security, and content moderation causes inconsistent application of the law.
- Regulatory guidance increasingly acts as a proxy for law.
- Growing frustration by all stakeholders with regulatory structures that increasingly disappoint is raising questions about data protection effectiveness.

Public trust diminished by misinformation enabled by social networks and other media
- In government
- In business

Emergence of dominant digital players
- During the 25 years of the observation age, a number of dominant players have emerged in different markets and in different geographies.
- Late industrial age competition law has been found wanting in achieving fair markets in an observational age. Focus is on dominant technology providers and their power. These developments are resulting in a redefinition of competition law.
- There is a debate among regulators and policy makers whether to stick to the current hands-off antitrust approach or whether to move to a more interventionist approach that takes account of the value of personal data.

Changing global balance of power
- China’s assertion of its global role, even in data protection, is based on a very different vision of the sovereignty of the individual versus the sovereignty of the polity.
- U.S. leadership is declining.
- Questions about Europe as the norm maker for data use increasingly are being asked.

  • Dynamic Data Security Should Be the Policy Default: Dynamic Data Obscurity Revisited

    The IAF used the phrase “Dynamic Data Obscurity” in 2015 when I organized a Washington dialog and a Brussels session. With Schrems II and draft legislation in Canada, it is time to bring the term back. Below is an update of my 2015 blog. This blog appeared first in various IAPP publications on January 8, 2021.

In 2030, I will be 80 years old and very dependent on data-driven technologies. In ten years, I will not own a car; instead, I will share a vehicle with others and will have the vehicle available when I need to go somewhere. The next generation of vehicles will be fed by data in the cloud that is based on observation of me and millions of other travelers. The data will be used for a full range of activities, from improving the design of future vehicles to billing for services. The car will take me to a clinic visit where the physician will already know my 180-day running history based on the interaction between my various embedded medical devices. All of that history will be generated passively, based on consents long ago forgotten. The data will be shared among the many specialists charged with my health. When I arrive home, my house will recognize me, and the presets will customize my return experience (e.g., they will read my mood to decide whether I want a decaf coffee or a single malt).

This is my future, and as you know, the technology to accomplish it is already here. For example, when I get into my wife’s car on the driver’s side, the car senses it is me and sets my driving preferences. If I had a defibrillator in my chest, it would report events to my physician. So, the leap to a more observational future really is easy to predict. What still is not easy to predict is how that observational data will be governed to serve me and my community in a thoughtful and fair way. To the public, observation is tracking, and tracking makes people nervous, even as they have become dependent on things, like cars and medical devices, that are observation dependent. I believe one of the keys will be an evolution of the concept of dynamic data obscurity, where data are governed down to the element level. In many ways, the foundation for the concept may be found in the increased definitional requirements for pseudonymization found in the GDPR.

Data are not good or bad; it is when and how they are used that defines benevolence and maliciousness. Data receive context from other data. Data must be linked for that context to exist. There are enormous risks to people and society when data are linked for the wrong reasons and by the wrong persons. Based on those risks, the default position is that data in transit and data at rest should be robustly obscured. For that reason, encryption is very common. Many applications also may be run with data still in an obscured form. De-identification was in wide use back in the 1980s. Most analytics can be run with data links obscured. However, for accuracy purposes, data links must be used to match the data together before they are processed. The rules around when to obscure links and when not to must be robust and must be enforced. Technical measures must be used to protect those linking processes. But, in the end, no single, bright-line rule can exist that covers all the human interests at issue when data come together. It is policy processes that, in the end, carry that burden.
That is why the word “dynamic” is used in the phrase “dynamic data obscurity.” Today there still is policy confusion about the most basic processing terms. There is even further confusion about processing objectives and which rules should be applied to which processes. However, the basics are these: data that link to people should be governed by dynamic rule sets, and data links should exist only for the brief time they are necessary to achieve a legitimate objective, unless demonstrable assessments determine that the links should be maintained. Again, that is why the term “dynamic” is used. Other terms will have to be defined, and defined in a globally universal manner. A great deal of work has been done by think tanks and academics to define the terms de-identified, anonymous, and pseudonymous. However, there need to be political definitions for those terms that are broadly accepted. So, after five years, and before another law is enacted, let’s return to the concept of dynamic data obscurity. Let’s determine, before 2030 is here, which terms still need to be defined for global use and what rules still need to be set in place.
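The update argues that links between data should be obscured by default and used only briefly for legitimate, assessed purposes. As a loose illustration of that idea (the key handling, function names, and re-linking table below are hypothetical, not a prescribed design), a keyed pseudonymization scheme of the kind contemplated by the GDPR’s definition might look like this:

```python
# Hypothetical sketch of "obscured by default" data links: a direct
# identifier is replaced with a keyed pseudonym (HMAC-SHA256), so analytics
# can match records without seeing the raw identifier. Restoring the link
# requires a separately controlled lookup table. Illustrative only; real
# pseudonymization needs key management, access control, and assessment
# of re-identification risk.

import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-keep-me-in-a-vault"  # hypothetical; never hard-code in practice

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed pseudonym: same input and key yield the same
    token, so records can still be matched while the link stays obscured."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# A tightly held re-identification table, consulted only when an assessed,
# legitimate objective requires restoring the link.
relink_table: dict[str, str] = {}

def ingest(record: dict) -> dict:
    """Replace the direct identifier with its pseudonym before the record
    reaches analytics; the raw value goes only to the guarded table."""
    token = pseudonymize(record["email"])
    relink_table[token] = record["email"]  # guarded step; access should be logged
    return {**record, "email": token}

if __name__ == "__main__":
    obscured = ingest({"email": "alice@example.com", "visits": 12})
    print(obscured["email"][:16], "...")   # analytics sees only the token
```

Deterministic tokens like these still permit matching, and therefore linkage risk; the “dynamic” part of dynamic data obscurity is the governance that decides when, by whom, and for how long the re-identification table may be consulted.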

  • The Movement Towards Demonstrable Accountability – Why It Matters

    First published in IAPP Daily Dashboard

There is “accountability tension” going on that has the potential to stifle technology and data driven innovation. With Artificial Intelligence (AI), including advanced analytics and big data, beginning to affect almost every aspect of society, harnessing its potential for good while addressing its risks will require an end-to-end comprehensive, programmatic, repeatable, demonstrable governance system for adoption by all organizations seeking to use complex data analytical processing such as AI as part of their strategy and objectives. At the same time, there is an accelerating trust gap between some regulators and general business practices. Regulators have expressed surprise at how unprepared organizations are to meet even compliance-driven legal requirements.

The Information Accountability Foundation (IAF) has been leading the accountability movement for over a decade, first with the Global Accountability Dialogue and then, in 2018, with the release of the Ethical Data Stewardship Accountability elements. Accountability is a basic tenet of 21st century data protection law and governance. It is referenced explicitly in the European Union General Data Protection Regulation (GDPR), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and the APEC Privacy Framework. It was touted as a major change from the EU Data Protection Directive to the GDPR. This trend to focus more on accountability has continued with the recent release of draft guidance to support the new Singaporean data protection law and with draft legislation introduced in Canada, the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act, as an update to PIPEDA.

The IAF, as its name suggests, has been a major advocate for building out measures so organizations can manage and use data in a responsible and answerable manner. To be trusted, accountability has always needed to be demonstrable. The IAF, which studies the policies and practices necessary to use data wisely in a manner that serves people, is finding that organizations need to be explicitly accountable in a more formal manner. Why? More complex data analytical processing requires more accountability, and there is increasing evidence that regulators do not trust the current state of accountability.

Regulators report they do not find accountability measurable and adequately implemented. For example, in its 2018-2019 Report to the Parliament of Canada, the Office of the Privacy Commissioner (OPC) criticized the governance component of accountability when it stated:

We are increasingly noticing the ways in which it [accountability] has become deficient in today’s world of complex data flows and less than transparent business models. Our recent investigations into Facebook and Equifax, for example, revealed that accountability as traditionally framed in the law is not strong enough to protect Canadians from the intrusive practices of companies who say they are accountable, but are in fact found not to be. [1]

Against this backdrop, there are a number of leadership companies that are going beyond compliance and adopting “trust” driven governance approaches because they view trust as a business-critical goal. Their goals are to make sure their technology, processes and people are working in concert to maintain the high levels of trust expected by their many stakeholders.
In short, they are implementing trust driven, beyond-legal-compliance, demonstrable accountability processes. Fair processing will be key to the responsible use of technology and data that is trusted. The IAF has released a report to address the Movement Towards Demonstrable Accountability – Why It Matters. This report:

- Profiles what these companies are doing to build trust in data innovations through enhanced accountability; and
- Suggests some considerations for other organizations and for regulatory guidance and future public policy.

The combination of the trust gap and a study of what these leadership companies are doing also suggests a need to evolve the Ethical Data Stewardship Accountability elements the IAF introduced in 2018 to encompass “fair processing.” The report also outlines new Fair Processing Demonstrable Accountability Elements that will enable many of the economic benefits that the use of advanced analytics, which drive technologies like AI, can bring to individuals, groups of individuals, society and organizations, while meeting the broader needs of these groups relative to ethical and fair data processing. The IAF believes these new Fair Processing Demonstrable Accountability Elements advance what the original Essential Elements of Accountability are capable of doing in two key ways:

First, when implemented, they facilitate the trust necessary to enable the adoption of data driven technologies like AI and their associated data use.

Second, they demonstrate accountability that goes beyond compliance. The original Essential Elements of Accountability, while key, simply meet compliance objectives regarding existing law.

For these organizations, while “trust” is the business objective, an outcome of these leading organizational practices is a demonstrable programmatic approach to “fairness.” This system is a step up beyond the accountability required for less complex data processing and requires a “trust” driven approach rather than just a “legal” compliance approach to accountability. This result moves accountability based on legal privacy and data protection requirements to accountability based on “fair processing” of data.

These leadership companies have basic accountability mechanisms that are easily demonstrable and adequately implemented. However, they have gone well beyond the basics of accountability. The IAF learned that these companies are implementing demonstrable accountability against a broader objective of trust. These findings match the research in The Ohio State University report, Business Data Ethics: Emerging Trends in the Governance of Advanced Analytics and AI, which concluded that a group of organizations implement ethical approaches that go beyond legal compliance objectives in order to build trust in complex data analytical processing. While ensuring they meet compliance-based requirements, these leadership companies have shifted the focus to fair processing. Regulators could look more to these leadership companies as examples of demonstrable accountability.

Organizations that create products and services and make decisions based upon a demonstrable accountability foundation to build trust can earn the ability to use advanced analytical data processing and AI to their full potential. This is why demonstrable accountability matters.
[1] Privacy Law Reform: A Pathway to Respecting Rights and Restoring Trust in Government and the Digital Economy, 2018-2019 Annual Report to Parliament on the Privacy Act and the Personal Information Protection and Electronic Documents Act, https://www.priv.gc.ca/en/opc-actions-and-decisions/ar_index/201819/ar_201819/
