The IAF Team

Summary of IAF Comments on Proposed EU AI Regulation

Using AI to enable the creation of better insights and better predictive capabilities is a clear objective of the European Commission’s proposed Regulation laying down harmonised rules on artificial intelligence (“AI Regulation”).  Fundamental rights and interests, such as employment, health, education, and the ability of smaller businesses to find a competitive advantage, are impacted by the trusted digital innovation the Commission is attempting to foster.  The Information Accountability Foundation submitted comments to the Commission on 28 July 2021 with the clear purpose of fostering the innovation necessary to serve the full range of fundamental rights and interests for Europeans, and of making the AI Regulation an example to the rest of the world.  It is the IAF’s view that several areas of the AI Regulation require improvement for it to achieve its objective.  The IAF’s comments were split into two parts:

  • The first section addressed the interface between the objectives of the AI Regulation and the GDPR.  AI conducted with personal data is covered first by the GDPR, and processing that the GDPR prohibits or makes legally uncertain will be difficult to use in an AI application.  This result is at odds with the objective of the AI Regulation and could be resolved through some modest revisions to the GDPR.

  • The second section covers the risk management components of the AI Regulation and whether they effectively address the risks to people both from processing data and from choosing not to process it.  The IAF thinks a more effective approach to risk management could be achieved, and that approach has been outlined in the IAF’s model legislation, The Fair and Open Use Act.[1]


This blog, the first of three, is a summary of the IAF’s comments submitted as part of the European Commission’s consultation process.


Summary of Comments:

Better predictions are the product of probability tools that rest on quality data that is well curated and responsibly stewarded.  The nomenclature for this process is “thinking with data.”  Artificial Intelligence (“AI”) is the substantial next step in organisations thinking with data.  Thinking with data results in outputs, or “insights,” and the nomenclature for that process is “knowledge discovery.”  Knowledge discovery is where AI begins and is the source of better predictions.  New knowledge, once created through knowledge discovery, creates the pathway to the desired future of better operations, resource allocation, and environmentally beneficial outcomes.  The nomenclature for this process is “acting with data.”


This application of learning may take place either directly by people or through some degree of automated decision making.  This concept of a two-step approach is built into the AI Regulation in its delineation of providers of AI systems (thinking with data) and users (acting with data).  AI development involves experimenting or exploring with data to identify signals or trends, which leads to the opportunity to create a model whose outcome can drive decision making.  There are risks to people at both phases, but they may be very different, and arguably the risks to people are significantly less at the knowledge creation stage.  However, the identification of risk requires robust assessments of the right types at the right time.


The GDPR should make technology applications such as AI possible, yet in the IAF’s view there are unresolved tensions between the AI Regulation and the GDPR.


Proposed Changes to the GDPR

Knowledge discovery with personal data meets the current GDPR definition of profiling, and at least part of the AI process is knowledge discovery.  Some AI solutions, those likely to make automated decisions by running personal data through a model developed using AI, would meet the definition of fully automated decision making.  There are solutions for the specific areas of the GDPR that are problematic for AI.

  • Article 5 requires that processing be lawful and fair and that data use be consistent with the purpose for which the data was collected.  AI training data can come from many different sources, so assuring that data was collected for a purpose consistent with AI training is difficult.  Instead, knowledge discovery should join scientific research in being recognized as a compatible purpose under Article 5.  Doing so would recognize that research uses of data, whether they meet the scientific test or not, raise fewer risks than the actual application of data to take an action that impacts individuals.


  • Article 6 requires that data be processed only when there is a lawful basis.  The legal basis that most often would fit is legitimate interest, but legitimate interest takes into consideration only the legitimate interests of the controller and the interests of the data subject.  Training data typically is not about the interests of a single individual but rather the interests of a broader group impacted by the insights that come from AI processing.  Therefore, scientific research and knowledge discovery should be added to Article 6 as legal bases (g) and (h).  To protect these new legal bases from misuse, appropriate conditions, particularly transparency related to the objectives of and safeguards for processing, should be added as well.


  • Article 9, which covers the processing of special categories of personal data, has made the processing of data for scientific research purposes, as well as for more general knowledge discovery, quite difficult.  While scientific research is a compatible purpose under Article 5, data that meets the special category test and was collected for a different purpose may be used for scientific research only with consent.  Instead, a pathway for using special categories of data in knowledge discovery and scientific research should be created in Article 9.


  • Article 35 requires Data Protection Impact Assessments (DPIAs) when data processing poses a “high risk” to individuals.  However, it draws no distinction between the risk controls appropriate at the knowledge discovery stage and those appropriate when insights derived at that stage are applied.  Therefore, Article 35 should be amended to apply more clearly at the application-of-insights stage (use of data), where the risk to individuals is much greater, and not at the knowledge discovery stage.


  • Article 22 says that “the data subject shall have the right not to be subject to a decision based solely on automated processing . . . which produces legal effects concerning him or her or similarly significantly affects him or her.”  This language may preclude AI processing before the AI Regulation’s jurisdiction even comes into effect.  Therefore, Article 22 should be revised to distinguish more clearly between profiling and automated decision making.  Profiling is central to the knowledge discovery process.  As such, it should be subject to a DPIA to ascertain that processing is conducted in a legal and fair manner.  Automated decision making is a separate process.  It should be subject to assessments, and the risks from a particular automated decision-making process should be understood and documented.  Currently, the GDPR conflates the two.


  • Lastly, Recital 4, which is (or should be) the heart and soul of the GDPR, states:

The processing of personal data should be designed to serve mankind.  The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights . . . .


The intent of this Recital has not been adopted in the application of the GDPR in areas such as the balancing required under a legitimate interest assessment.  In the IAF’s view, this is a byproduct of failing to explicitly carry the Recital’s intent forward into a specific Article.  This failure could be remedied through revisions to Article 5 of the GDPR.


Risk Management Issues

A heavy focus on product safety drives many of the risk management requirements in the AI Regulation, giving too little weight to the reality that AI is more about the implications of the data collected and produced and less about the product.  Instead, the AI Regulation should be adapted so that all AI application scenarios are risk assessed, rather than applying elements of the Regulation only to a narrow set of defined high-risk scenarios.  This change would allow requirements to be tailored to those applications that have the potential to create higher risk.


Furthermore, in the AI Regulation, risk management tools have been narrowed to conformity assessments and applied only to a small subset of high-risk applications.  A conformity assessment focused on product safety does not adequately address data-driven benefits and risks.  Moreover, conformity assessments are not consistent with the trend toward broader AI impact assessments, or algorithmic impact assessments (AIAs).  Academic work, public sector developments, and proposed legislation in other jurisdictions suggest or propose the use of AIAs.  For example, governments in Australia, Canada, Singapore and, shortly, Hong Kong have implemented or will implement processes and requirements for their public sectors to address the ethical and other risk challenges of AI.  These guidelines often include AIAs.  Even before laws and regulations drive the need for expansive impact assessments, public sector procurement rules will have an earlier impact.


Therefore, the IAF suggests:

  • Assessing the likely risk of an AI application should be required.  Doing so would allow good data governance requirements to extend to all AI applications.  While many of the requirements in the AI Regulation arguably should apply to all areas of data-driven AI development, such assessments allow requirements to be tailored based on risk.

  • Broad-based AIAs should be a mainstay requirement for all AI applications and should include the assessment of risk.  Conformity assessment requirements as currently outlined in the AI Regulation could be incorporated into an AIA as appropriate, as could aspects of a DPIA as required by Article 35 of the GDPR.


Conclusion

The GDPR needs to be revised to accommodate knowledge discovery so that the full range of fundamental rights is protected.  Axel Voss’s white paper, “Fixing the GDPR: Towards Version 2.0,” outlines some of the structural issues in the GDPR related to knowledge discovery.  The AI Regulation does include an exemption permitting special categories of data to be used to measure for bias, so exemptions to the GDPR through the AI Regulation are possible, but they are not the optimal approach.

The GDPR is not fine-tuned for AI, and the AI Regulation covers only “high-risk” AI, relying on conformity assessments that are dependent on established standards.  Work in trustworthy AI suggests that all AI that touches on people should undergo AIAs.  Only when an AIA and the associated risk analysis are conducted can the full range of risks associated with data processing through AI be identified and mitigated.



[1] The IAF developed a model legislative approach primarily as a communications tool, as it is easier to engage with policy makers in the U.S. in “legislative” language and format.  While the model legislation is drafted with a U.S. policy maker audience in mind, it is illustrative, and parts and/or themes could be adopted in other legislative formats globally.
