Peter Cullen

AI and the Road to Expansive Impact Assessments

Assessments related to the complex use of data about people are about to expand significantly, requiring new skills from both organizations and regulators. Today, in many regions of the world, Privacy Impact Assessments (PIAs) are either explicitly required or implicitly required to meet other regulatory demands, such as processing documentation. However, these impact assessments are oriented toward achieving compliance and, as such, are largely assessments of risk to the organization. In short, they often are not really about the impact on individuals that results from complex data processing.


In Europe, the GDPR requires a Data Protection Impact Assessment (DPIA) for certain higher-risk data scenarios. This assessment is intended to look at a broader range of interests and risks. However, as regulators have reported, DPIAs have not been implemented as intended in many organizations.


Organizations will soon be conducting impact assessments that are far more expansive and rigorous than those required by today’s data protection and privacy laws. The driver of this trend will be the risks associated with Artificial Intelligence (AI). The approach may initially be driven by organizations seeking to manage the new, more complex risks associated with the use of AI, as well as by requirements relating to government procurement, but it eventually will be mandated by new laws governing the use of AI and its fairness implications for people. A prime example of this legal trajectory is the recently released EU Proposal for a Regulation laying down harmonized rules on artificial intelligence (Proposed Artificial Intelligence Regulation), with its myriad requirements on organizations. In the U.S., standalone AI-related legislation and requirements for more comprehensive assessments appear in some proposed federal legislation and in enacted state privacy legislation.


A number of leading organizations have already started down the path of more expansive assessments; they were profiled in the IAF report Demonstrable Accountability – Why It Matters. As AI starts to affect almost every aspect of society, addressing its risks will require an end-to-end, comprehensive, programmatic, repeatable, and demonstrable governance system for adoption by all organizations seeking to use these complex systems. These governance systems will include broad, risk-based decision-making assessments.


While laws are likely to evolve in this space, addressing AI risks demands a “trust”-driven approach rather than just a “legal” compliance approach to accountability. This approach evolves accountability from a basis in legal requirements to a basis in the “fair processing” of data. In short, trust in more complex, more opaque systems that have an impact on individuals will require more fair-processing obligations on the organizations using those systems.


Since the release of the IAF’s report, further study has illuminated the key role impact assessments play in a demonstrable governance model. To understand this issue more completely, the IAF took a deeper look at some of the organizations studied in the first report. That study, however, quickly evolved to focus on the role that the adoption of AI, and the advanced governance model it requires, will play. By extension, the drivers of “responsible” AI indicate that the push for a different, more expansive governance model is likely to come from the AI community rather than the data protection community. Those drivers include organizations following the example of today’s leaders and, increasingly, the mitigation of reticence risk as organizations seek to make risk-based decisions about adopting technologies involving AI.


The implications are clear. These new requirements will demand new skills, roles, and capabilities in organizations. They also mean regulators will need new skills and resources, especially as laws and associated enforcement models develop.


To enable trust-based AI that is demonstrably fair, organizations and regulators will need to evolve. The IAF’s new report, The Road to Expansive Impact Assessments, addresses the role of expansive impact assessments. It first takes a deeper look at some of the originally studied organizations, as well as at how data protection regulators view demonstrable accountability. It then outlines the role assessments play, analyzes the drivers of more complex assessments such as algorithmic or AI impact assessments (AIAs), and concludes with a deeper look at the role of AI and its associated governance needs as a driver of this trend.


The appropriate structure and mix of regulation and enforcement will require considerably more thought and further work. However, given the trends noted in this report, organizations should be developing more robust governance systems, including more expansive impact assessments, both to enable their business strategies and to inoculate themselves against the likely direction of regulatory requirements.
