The following blog was taken directly from the IAF comments filed in response to the California Privacy Protection Agency's request for comments on assessments and automated decision-making.
The February 10, 2023, Invitation for Preliminary Comments asks a series of questions related to automated decision-making and profiling. The IAF is not responding to the specific questions but instead setting out some basics for the discussion. The fact is that automated decision-making is baked into how things work on an everyday basis. For example, the CPPA uses automated decision-making every day on requests from browsers seeking to reach the CPPA’s servers. Those decisions have the effect of limiting who can browse the CPPA’s website and file complaints, and that is a good thing, because the alternative would be constant security breaches. However, the issues related to profiling and automated decision-making predate the era when web browsers made the Internet a consumer medium.
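To make that example concrete, the sketch below shows what automated decision-making in web security can look like: each incoming request is allowed or denied by rule, with no person in the loop. The block list, rate limit, and addresses are invented for illustration and are not the CPPA’s actual configuration.

```python
# Minimal sketch of automated decision-making in web security: each incoming
# request is allowed or denied by rule, with no human in the loop.
# The block list and threshold below are illustrative, not the CPPA's actual policy.

from collections import defaultdict
from dataclasses import dataclass

BLOCKED_IPS = {"203.0.113.7"}      # known-abusive addresses (example values)
MAX_REQUESTS_PER_MINUTE = 120      # illustrative rate limit

request_counts = defaultdict(int)  # requests seen per IP in the current minute


@dataclass
class Request:
    ip: str
    path: str


def decide(request: Request) -> str:
    """Return 'allow' or 'deny' for a single request, automatically."""
    if request.ip in BLOCKED_IPS:
        return "deny"
    request_counts[request.ip] += 1
    if request_counts[request.ip] > MAX_REQUESTS_PER_MINUTE:
        return "deny"  # likely a bot or scraper, not a person filing a complaint
    return "allow"


print(decide(Request(ip="198.51.100.4", path="/file-a-complaint")))  # allow
print(decide(Request(ip="203.0.113.7", path="/")))                   # deny
```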
Martin Abrams, former President and current Chief Policy Innovation Officer of the IAF, was President of the Centre for Information Policy Leadership (CIPL); Vice President, Information Policy, at Experian; Director, Consumer Policy, at TRW Information Systems and Services; and Community Affairs Officer of the Cleveland Federal Reserve Bank. His background gives him the perspective to provide the following comments.
The consumer Internet accelerated an observational age that, in turn, accelerated the use of data for probabilistics pertaining to how people behave. The first broad-based probabilistic use of consumer data was probably the Fair Isaac credit risk score in 1989. It was quickly adopted by the consumer lending industry as an aid to better decisioning than was possible with the subjectivity of decisions made purely by lending officers. Soon that aid to human decision-makers evolved into fully automated credit decisions. The U.S. Department of Justice (DOJ) investigated whether those decisions were, in effect, being made on grounds that violated the Equal Credit Opportunity Act (ECOA). Since the data for credit risk scores came directly from credit bureaus, the Fair Credit Reporting Act (FCRA) required that the use of a score be disclosed along with the factors that led to a denial. So, from the very beginning, the use of profiling and automated decision-making for substantive decisions was covered by a fair processing law, the FCRA.
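As an illustration of how a score-based automated credit decision can also generate the kind of FCRA disclosure described above, the sketch below pairs a simple cutoff decision with the principal factors behind a denial. The cutoff, weights, and factor names are invented for illustration; they are not Fair Isaac’s actual model.

```python
# Illustrative sketch: a score-based automated credit decision that also reports
# the fact that a score was used and the principal factors behind a denial,
# in the spirit of FCRA adverse-action disclosures. All numbers are made up.

APPROVAL_CUTOFF = 660  # hypothetical score threshold


def score_applicant(factors):
    """factors: list of (name, weight, applicant_value) tuples (hypothetical units)."""
    contributions = {name: weight * value for name, weight, value in factors}
    return 300 + sum(contributions.values()), contributions


def decide(factors):
    score, contributions = score_applicant(factors)
    if score >= APPROVAL_CUTOFF:
        return {"decision": "approve", "score_used": True}
    # Adverse action: disclose score use and the factors that most hurt the score.
    worst = sorted(contributions, key=contributions.get)[:2]
    return {"decision": "deny", "score_used": True, "key_factors": worst}


applicant = [
    ("payment_history", 3.5, 80),
    ("utilization", -2.0, 60),
    ("length_of_history", 1.5, 40),
]
print(decide(applicant))  # denies and names the two weakest factors
```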
In Europe, there was no uniformity in the data available for consumer credit decision-making. As Europe evolved towards the creation of the 1995 EU Privacy Directive, there were debates on whether it was unseemly for decisions about people to be made solely by a machine. Those concepts of what is or is not seemly influenced the drafting of Article 22 of the GDPR. So, there are cultural differences between the way Europe sees these issues and the way they are seen in the United States. The fact is that the relationship between profiling, the use of probabilistics against broad data sets, and automated decision-making is still muddled under Article 22 of the GDPR.
The 21st century saw the rise of analytic skills that allowed unstructured data to be fed into advanced analytic processes. Legacy statistics tested for causality, while the growth of big data shifted the dominant theme to correlation. This change naturally raised questions about the accuracy of the correlations, whether they were appropriate to apply, and whether they were influenced by the bias built into available data sets. This development has informed the debate about algorithmic fairness. These concerns have accelerated with the growing use of AI, which is the next stage of advanced analytics in our observational world.
So, in thinking about the questions the CPPA is asking, some pragmatic truths need to be addressed:
Profiling is probabilistics built with consumer data. Building choice into the data that feeds the probabilistics has the unintended consequence of skewing the accuracy of predictive values. Choice worked when the relationship was one-on-one, but ours is now an observational world in which few relationships are one-on-one. Choice no longer fits and, indeed, harms the process in an observational world.
Automated decision-making is built into how many modern processes work, including the functioning of the CPPA’s cybersecurity processes. Many automated decision-making processes are already subject to laws such as the FCRA, ECOA, and Fair Housing Act (FHA). Those legal regimes have already wrestled with these issues and concluded that the benefits of automated decision-making outweighed the risks. They also established methods for determining whether automated decision-making is biased (after-the-fact testing; a simple illustration appears below), and those methods are just as applicable today as they were when they were implemented.
Much of the emotion that surrounds automated decision-making relates directly to whether one believes it is fairer for a person to make a decision or whether a well-governed algorithm would, in the end, be fairer. As mentioned above, the DOJ, in the context of the ECOA, concluded that a well-governed algorithm was better.
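The after-the-fact testing referred to above can be as simple as comparing outcomes across groups once decisions have been made. The sketch below computes an adverse impact ratio, one common screening measure often paired with the four-fifths rule of thumb; the group labels and counts are invented for illustration.

```python
# Minimal sketch of after-the-fact bias testing: compare approval rates across
# groups after decisions have been made. The adverse impact ratio (a group's
# approval rate divided by the highest group's rate) is one common screen.
# The groups and counts below are invented for illustration.


def adverse_impact_ratios(outcomes):
    """outcomes: {group: (approved, total)} observed after decisions were made."""
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}


observed = {
    "group_a": (480, 800),  # hypothetical post-decision counts
    "group_b": (300, 700),
}

for group, ratio in adverse_impact_ratios(observed).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb as a screen
    print(group, round(ratio, 2), flag)
```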
The IAF staff believes this is where the discussion should begin.