US State Privacy Laws Will Fundamentally Change the Way Businesses Assess Harm

June 14, 2023 | Peter Cullen

Many privacy professionals are blown away by the pace at which laws are being enacted at the state level in the United States. Not to be overlooked in this whirlwind of change is how these new laws alter the way businesses must identify stakeholders and the adverse impacts on those stakeholders' interests. This development will have impact beyond the United States as other jurisdictions look for ways to cope with data-driven technologies such as generative AI.


Legislative activity regarding Privacy and Data Protection continues at a feverish pace. At last count, at least 12 U.S. States have passed privacy laws. There have been calls for U.S. Federal privacy legislation and AI regulation, and for AI or algorithmic laws at the State level. Work continues on the AI Act in Europe and on Canada’s Bill C-27. Often overlooked in this frenzy are the Colorado Rules promulgated under the Colorado Privacy Act (CPA), which come into effect July 1, 2023. These new State privacy laws and the Colorado Rules have the potential to significantly change business requirements and practices and the way laws, regulations, and rules are applied. These impacts have significant consequences.


Eight of the 12 State privacy laws introduce broad and comprehensive impact assessments, requiring a Data Protection Assessment (DPA) when processing activities “present a heightened risk of harm to consumers” (i.e., risky processing).[1] Examples of risky processing include:

  • Targeted advertising

  • Selling personal data

  • Profiling that presents a reasonably foreseeable risk of

    • Unfair or deceptive treatment or disparate impact

    • Financial, physical, or reputational injury

    • Physical or other intrusion upon solitude or seclusion offensive to a reasonable person

    • Other substantial injury to consumers

  • Processing sensitive data


Profiling means any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

These laws also require that a DPA identify and weigh the benefits that may flow, directly and indirectly, to the controller, the consumer, other stakeholders, and the public, against the potential risks to the rights of consumers as mitigated by safeguards employed to reduce those risks. The weighing required by these State privacy laws is different from the weighing required by privacy laws anywhere else in the world. For example, data protection impact assessments (DPIAs) under the EU General Data Protection Regulation (GDPR) only call for an assessment of the risks to the rights and freedoms of natural persons. The GDPR DPIA is a tool for managing risks to the rights of data subjects and typically is enforced against a narrower set of data protection rights; other fields manage the risks to society and organizations (see EDPB Guidelines, p. 17). In effect, the intent of Recital 4 of the GDPR, which could have encompassed a broader analysis of data processing impacts on multiple stakeholders and multiple rights and interests, has been ignored. Not so in Colorado.


What makes Colorado significant is the promulgation of its Rules. The Rules go further than the State privacy laws: they include 11 risks that must be considered, six of which are in the IAF’s FAIR and OPEN USE ACT; they require the controller to identify mitigating measures; they specify the timing of when DPAs should be conducted; and they require a DPA for Profiling if the Profiling presents a reasonably foreseeable risk. The content requirements of the DPA for Profiling include an explanation of the training data and logic used to create the Profiling system; a plain language description of the outputs secured from the Profiling process and how they are or will be used; and a description of how the Profiling system is evaluated for fairness and disparate impact, together with the results of any such evaluation. Controllers should consider both the type and degree of potential harm to Consumers when determining if Profiling presents a reasonably foreseeable risk of “other substantial injury” to Consumers. Profiling covered by DPA obligations includes Profiling using Solely Automated Processing, Human Reviewed Automated Processing, and Human Involved Automated Processing.
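To make these content requirements concrete, here is a minimal sketch of how an organization might capture them as a structured record. The field names and class names are illustrative assumptions, not terms defined by the Colorado Rules; only the enumerated processing categories come from the Rules themselves.

```python
from dataclasses import dataclass
from enum import Enum


class AutomationLevel(Enum):
    """Processing categories named in the Colorado Rules."""
    SOLELY_AUTOMATED = "Solely Automated Processing"
    HUMAN_REVIEWED = "Human Reviewed Automated Processing"
    HUMAN_INVOLVED = "Human Involved Automated Processing"


@dataclass
class ProfilingDPARecord:
    """Hypothetical record mirroring the Rules' content requirements
    for a profiling DPA; field names are illustrative, not official."""
    training_data_description: str     # explanation of the training data used
    logic_description: str             # explanation of the logic behind the system
    outputs_plain_language: str        # plain-language description of the outputs
    output_uses: list[str]             # how the outputs are or will be used
    fairness_evaluation_method: str    # how the system is evaluated for fairness/disparate impact
    fairness_evaluation_results: str   # results of any such evaluation
    automation_level: AutomationLevel  # which processing category applies
```

A record like this is only a starting point; the substantive weighing and mitigation analysis discussed below cannot be reduced to filling in fields.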


The actions required by the Colorado Rules are demanding; most of them are not mechanical and call for more than a translation of the Colorado Rules into a compliance checklist. To meet the Colorado Rules’ explicit and implicit requirements, the IAF believes organizations will have to adopt new governance processes and procedures. Required actions call for open-ended, multi-dimensional weighing, with the explicit expectation that multiple internal stakeholders will be involved in the assessment.


The timing requirements of the DPA, and the iterative nature of the way data insights are developed in AI, suggest additional ways a controller will have to integrate a DPA into its business processes. For example, under the Colorado Rules, “The Controller must review and update the DPA as often as appropriate considering the type, amount, and sensitivity of Personal Data Processed and level of risk presented by the Processing, throughout the Processing activity’s lifecycle in order to: 1) monitor for harm caused by the Processing and adjust safeguards accordingly; and 2) ensure that data protection and privacy are considered as the Controller makes new decisions with respect to the Processing.” Thus, the Colorado Rules require at least an annual review and update. By implication, organizations likely will need to use a set of “triggering” questions to determine if a DPA, including for Profiling, will be needed, as sketched below. Practically speaking, ALL projects likely will have to go through an initial risk assessment to determine if a DPA likely will be required. This process reinforces the likelihood of having to design an iterative, multistage assessment more aligned with an AI development lifecycle.
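The following is a minimal sketch of such a triggering-question pre-screen. The questions are hypothetical, loosely derived from the risky-processing categories listed earlier in this article; an actual screen would need to reflect the statutory definitions and an organization's own risk taxonomy.

```python
# Illustrative pre-screen: hypothetical triggering questions, loosely based
# on the risky-processing categories above; not an official checklist.
TRIGGER_QUESTIONS = {
    "targeted_advertising": "Does the project process personal data for targeted advertising?",
    "sale_of_data": "Does the project involve selling personal data?",
    "sensitive_data": "Does the project process sensitive data?",
    "profiling_foreseeable_risk": (
        "Does the project profile individuals in a way that presents a "
        "reasonably foreseeable risk of unfair treatment or injury?"
    ),
}


def dpa_likely_required(answers: dict[str, bool]) -> bool:
    """Return True if any triggering question was answered 'yes',
    signalling that the project should proceed to a full DPA."""
    return any(answers.get(key, False) for key in TRIGGER_QUESTIONS)


# Example: a project that profiles applicants for credit decisions
answers = {
    "targeted_advertising": False,
    "sale_of_data": False,
    "sensitive_data": False,
    "profiling_foreseeable_risk": True,
}
print(dpa_likely_required(answers))  # True -> route the project to a full DPA
```

Running every project through a cheap screen like this, and only the flagged ones through the full assessment, is one way to reconcile the "ALL projects" expectation with finite review capacity.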


Perhaps the most significant impact of these laws and the Colorado Rules is that an Attorney General can ask for any DPA, and it is likely the AG’s office also would ask for details on processes and controls associated with key mitigators. As a result, a demonstrable governance program, likely including new elements, will need to be considered, and in some cases (e.g., under the Colorado Rules) will be required.


The IAF has been positioning much of its assessment work with these overall requirements in mind. In addition to supplementing the requirements of the Colorado Rules with the IAF’s prior work, the IAF has developed an impact analysis that takes into account all of the stakeholders, weighs the benefits/interests against the risks/harms using a 1, 3, 5 scale, and shows the results in two different ways: a mathematically determined score and a visual depiction. The IAF’s multi-dimensional weighing is unique. To make the Colorado Rules more actionable, we have developed an Assessment Framework that maps the explicit and implicit requirements of the State laws and the Colorado Rules into a model tool that organizations can incorporate into their operational processes.
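To illustrate what a multi-stakeholder 1-3-5 weighing can look like, here is a small sketch. The stakeholder list follows the statutory language quoted above; the scores, the simple sum-and-compare arithmetic, and the text-bar "visual" are assumptions for demonstration only and do not reproduce the IAF's actual tool.

```python
# Illustrative multi-stakeholder weighing on a 1 (low), 3 (medium), 5 (high)
# scale; scores and aggregation method are hypothetical.
STAKEHOLDERS = ["controller", "consumer", "other stakeholders", "public"]

# Benefits and residual risks (after mitigating safeguards) per stakeholder.
benefits = {"controller": 5, "consumer": 3, "other stakeholders": 1, "public": 3}
residual_risks = {"controller": 1, "consumer": 3, "other stakeholders": 1, "public": 1}

# Visual depiction: a crude per-stakeholder bar of benefits vs. risks.
for s in STAKEHOLDERS:
    net = benefits[s] - residual_risks[s]
    bar = "+" * benefits[s] + " / " + "-" * residual_risks[s]
    print(f"{s:>18}: benefit {benefits[s]}, residual risk {residual_risks[s]}, net {net:+d}  [{bar}]")

# Math-determined result: compare aggregate benefits against aggregate risks.
total_benefit = sum(benefits.values())
total_risk = sum(residual_risks.values())
verdict = "benefits outweigh risks" if total_benefit > total_risk else "escalate for review"
print(f"\nmath-determined result: benefits {total_benefit} vs risks {total_risk} -> {verdict}")
```

Even this toy version shows why the exercise is not mechanical: the scores themselves, and any decision rule for aggregating them, embody judgments that multiple internal stakeholders must defend.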


While the Colorado Rules are specific to that state, as mentioned earlier, seven other state laws require assessments that go beyond the controller and consumer to other stakeholders as well, and California is issuing regulations requiring such assessments.[2]  It is likely the Colorado Rules will influence regulations being issued in the other states.


While the operational impacts of these State laws and the Colorado Rules are large, the consequential impact is perhaps just as far-reaching. Specifically, the broadening of the range of harms and stakeholders that could be impacted under the eight State laws goes well beyond the way privacy and data protection regulation typically has been approached. For example, in the EU, the GDPR focuses on a narrower set of data protection rights and their impact on an “individual.” While the IAF has long argued that the risks and benefits of data processing, and most importantly of the use of data or insights, should be weighed across all stakeholders, the EU regulatory structures are not set up to oversee this. To illustrate this conundrum, under the EU AI Act, the personal data aspects of AI are governed by the GDPR and the regulators of its Member States. Once the AI Act goes into effect, if a business uses AI-driven data insights to restructure its operations resulting in the loss of jobs (one of the overall risks of AI is often told through the lens of job loss or substantial work impact), some may argue that the loss of jobs is an unfair outcome. In some Member States, the Data Protection Authority is being positioned as the AI Regulator under the AI Act. Is a current Data Protection Authority in a position to opine on the fairness of this business decision? Are current regulators focused on data protection able to address the many varied risks of AI? What sort of competency and capacity do Data Protection Authorities need to develop?


Much has been written about the “ethical” AI requirements business should adopt. The U.S. State laws with assessment requirements and the Colorado Rules turn many elements previously thought of as “nice to haves” in a responsible AI governance program into real-world requirements. They make the expectations of demonstrable accountability realistic. Finally, they raise major questions and concerns about oversight and regulatory structures and competence.


But the most important, significant, and consequential impact of the State laws and the Colorado Rules is that they will first be picked up by other States (e.g., California), even if only thematically. This development will result in these broad, explicit and implicit, requirements becoming the standard. That standard will impact the way new laws and regulatory approaches are created everywhere.


 

[1] The California Privacy Protection Agency (CPPA), as part of its rulemaking responsibilities, is in the process of issuing regulations requiring businesses, whose processing of consumers’ personal information presents a significant risk to consumers’ privacy or security, to submit to the CPPA on a regular basis a risk assessment “weighing the benefits resulting from the processing to the business, the consumer, other stakeholders, and the public, against the potential risks to the rights of the consumer associated with that processing, with the goal of restricting or prohibiting the processing if the risks to the privacy of the consumer outweigh the benefits resulting from the processing to the consumer, the business, other stakeholders, and the public.”


[2] See footnote 1.
