A paradox accompanies the growing use of artificial intelligence (AI) enabled technology. As more organizations adopt AI, they increasingly recognize that AI carries unique challenges and risks or, at the very least, unintended consequences. At the same time, these organizations lag in building and adopting the enhanced governance processes required to manage this risk and achieve the responsible use of AI. As Kay Firth-Butterfield noted in the recent World Economic Forum paper, Building an Organizational Approach to Responsible AI (mit.edu), “To engineer successful digital transformation with AI, companies must embrace a new approach to the responsible use of technology. . . . AI differs from many other tools of digital transformation and raises different concerns because it is the only technology that learns and changes its outcomes as a result. Accordingly, AI can make graver mistakes more quickly than a human could.” In other words, AI’s speed and scale amplify the risk.
Why is there a lag in governance? As PwC’s Gain trust by addressing the responsible AI gaps reported, more than 60% of respondents neither have an ethical AI framework nor have incorporated ethical principles into day-to-day operations. Only 12% of companies have their AI risk management and internal controls fully embedded and automated; 26% have a standardized and communicated enterprise approach; the rest take a siloed or non-standardized approach to AI risk management.
One explanation is that implementation is hard. Yet this “explanation” is not a good excuse. As Tim O’Brien noted in his recent LinkedIn post, AI Ethics Has a Surplus Problem, there is an ever-increasing supply of content and knowledge about potential harms and how to mitigate them. There are open source tools for everything from detecting and mitigating bias in machine learning to bringing intelligibility and explainability to opaque models (for example, the tools collected at Responsible AI Resources – Microsoft AI). There are countless free frameworks and courses to support the implementation of ethical practices from, for example, fast.ai and Santa Clara University’s Markkula Center. Implementation may indeed be hard, but another factor may be that many of these approaches are not written through the lens of how organizations would actually implement them.
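To make the point concrete, here is a minimal sketch of the kind of open-source tooling referred to above, using Fairlearn, one of the libraries linked from Microsoft's responsible AI resources. It is an illustration only, not part of the IAF's material; the data, column names, and sensitive attribute are hypothetical.

```python
# Minimal, illustrative sketch: surfacing per-group disparities with Fairlearn.
# The data and the "gender" attribute below are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical ground truth, model predictions, and a sensitive attribute
y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
gender = pd.Series(["F", "F", "F", "M", "M", "M", "M", "F"])

# Compute accuracy and selection rate for each group to flag potential bias
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.by_group)       # per-group metrics
print(mf.difference())   # largest gap between groups for each metric
```

A few lines like these can feed an impact assessment with evidence about disparate outcomes, which is precisely the kind of input an AIA process is meant to review.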
A complementary aspect of implementing responsible AI is the use of impact assessments. Academics, NGOs, and some policymakers increasingly recommend Algorithmic Impact Assessments (AIAs) as part of enhanced governance systems to assess potential benefits, risks, and controls, with the goal of achieving responsible and ethical AI. An AIA is another name for the more expansive impact assessment outlined in the report of the Information Accountability Foundation (IAF), AI and the Road to Expansive Impact Assessments. AIAs are broader than, for example, the Data Protection Impact Assessments (DPIAs) required by Article 35 of the GDPR and the conformity assessments outlined in the EU’s proposed AI Regulation. Increasingly, AI-specific bills are being proposed in the U.S. and globally, and there has been much governmental debate and investigation into how to govern the impacts (risks) of AI. These bills universally contemplate some form of impact assessment.
However, to date there is no consensus on the structure and focus of an AIA, although many NGOs have released high-level proposals for one. These proposals, though valuable to the discussion around AIAs, have not addressed all the key elements of a successful assessment. Nor have they been written from the vantage point of how business processes actually work.
The IAF has extensive experience collaborating with business and other stakeholders to design assessments that go beyond what might be found in a simpler Privacy Impact Assessment. This work was first introduced with the Big Data Ethics Initiative: Assessment Framework (Part B) and then extended as part of the work with the Hong Kong Privacy Commissioner on Enhanced Data Stewardship (EDIA). More recently, work in Canada, A Path to Trustworthy People Beneficial Data Activities, explored a practical way to address that specific data use scenario. A logical extension of this work, and of the study of demonstrable accountability at leading organizations, led the IAF to develop a Model AI Impact Assessment (AIA), a key part of enabling the responsible use of AI. The full account, including a sample assessment, can be found in the IAF report Evolving AI Impact Assessments (AIA). That report also highlights related governance capabilities that would help enable the application of responsible AI. The IAF in particular thanks Ilana Golbin, Director of Responsible AI at PwC, for her strong and extensive contribution to this material.
How can organizations move forward? These assessments and governance models need to be adapted to fit an organization’s existing processes and business models, as well as its strategy and overall business objectives. Properly designed and implemented, AIAs offer a mechanism to address the complexities of AI solutions, including the ethical and legal aspects and the full range of impacts on all stakeholders. They can also support an organization’s own goals of mitigating reputational damage and achieving trusted-provider status with its stakeholders. AIAs would likewise serve broader objectives and cover regulatory requirements such as DPIAs. Ultimately, as organizations increase their use of AI, AIAs and the associated enhanced governance will be required to satisfy the demands for the ethical and trustworthy adoption of AI.