Canada and the United Kingdom are putting forward fair processing ideas that I believe will lead to a new direction in how data protection is thought about. These ideas follow a trend set when Singapore revised its data protection law, making legitimate interest more functional and directly linking it to accountability.
Why is this new direction so important? The linkage between probabilistic programming, in its most advanced form, and the protection of individuals from harm is what is at issue in an age of data-driven decision making and almost unlimited computing power. Advanced probabilistic programming includes but is not limited to artificial intelligence and the processes, like machine learning, that are used to make AI work. Probabilistic programming, when used in its most beneficial manner, saves lives in both medicine and national security and improves business and service processes in ways that enhance both human existence and corporate profits. Used in a careless or malevolent manner, probabilistic programming at best strips away human self-determination and at worst causes loss of life. This truth requires that legislation be more explicit in its achievable objectives, more flexible in its processes, and more adaptable in its enforcement.
The draft legislation most interesting to me is Canada’s Bill C-27, “An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act.” C-27 links the governance of personal data with the regulation of AI that relates to human behavior.
Bill C-27 has three parts. The first part, the Consumer Privacy Protection Act (“CPPA”), would replace the current private sector law, PIPEDA; the second part would establish the Personal Information and Data Protection Tribunal; and the third part, the Artificial Intelligence and Data Act, would govern AI. This blog is not intended to be a review of C-27; others have effectively done this.
The intent of the proposed CPPA is to govern the protection of personal data while recognizing the need “for organizations to process data for purposes a reasonable person would consider appropriate in the circumstances.” The proposed legislation thus establishes the concepts of context and the expectations of reasonable persons. The privacy and data protection part of the bill then translates the obligation that organizations be accountable into a comprehensive privacy management program. The bill also establishes that processing may only be for defined purposes and that those purposes must be reasonable. The required privacy management program must be staffed by persons with the skills to determine whether described purposes are reasonable, and those purposes must be public so the organization can be held to account. Canada has a long history of consent as a privacy process requirement, so the bill is consent based. However, Bill C-27 includes an expanded set of exceptions that are very accommodating to modern processing. The first is an exception to consent for business operations. Among those business operations are business activities, so long as they do not include processing for the purpose of “influencing the individual’s behaviour and decisions.” Another business operation is a Singapore-like approach to legitimate interests, which carries the same prohibition on influencing. However, political parties are not covered by the draft legislation, and politics is where the manipulation that C-27 intends to protect against has been most controversial. The business activities exception also includes data security. Additional business operations cover de-identification and internal research, with internal research without consent allowable only if the data has been de-identified. Together, these exceptions create a gateway for using data in AI processes such as machine learning where influencing is not the intent.
Part three of Bill C-27 regulates artificial intelligence in the private sector, but it is limited to AI that impacts people and meets the definition of a “high impact system.” The parameters for a high impact system would be defined by a new office within the Ministry for Innovation, Science and Economic Development. Bill C-27 defines both bias and harm, providing some clarity for businesses that must determine whether their AI is high impact.
Turning to the United Kingdom, the Department for Digital, Culture, Media & Sport’s response to its consultation “Data: a new direction” (DCMS Proposal) is an example of a report foretelling legislation. The intended purpose of the DCMS Proposal is legal reform “to secure a pro-growth and trusted data regime.” The DCMS Proposal is not itself legislation to reform the UK GDPR, but it anticipates what might be seen later this year in a legislative proposal. Chapter 1 looks at reducing barriers to responsible innovation. Its topics include a clearer definition of research, broader consent, and a simplified legitimate interest legal basis. Chapter 1.5 looks at AI and machine learning; specifically, it discusses fairness as it relates to AI, punting the details to a forthcoming paper on AI governance. Chapter 2 is on reducing burdens on business and delivering better outcomes for people. Like Canada’s C-27, the DCMS Proposal would create a requirement for organizations to have “privacy management programmes.” These programs would mirror the mature accountability programs suggested in Singapore to take advantage of a flexible legitimate interest exemption to consent.
There is growing agreement that modern processing is too complex to place the burden on individuals to govern data uses with their informed choices. The GDPR is intended to be based on permissible uses, but it still depends, to a great extent, on individual decisions to consent or object. State legislation in the United States has tended to focus on data subject rights. Neither approach is straightforward in governing probabilistic programming. Both Canada and the United Kingdom are confronting that challenge, and that is a positive trend. U.S. Congress Bill H.R. 8152 is due for a markup in the full House Energy and Commerce Committee, but specific language would need to be added to H.R. 8152 to match the efforts in Canada and the United Kingdom.
The interplay in Canada between AI and privacy law, the pro-growth and pro-data environment in the United Kingdom, and the possibility of a privacy law in the U.S. and its similarity, if any, to developments in Canada and the United Kingdom will be explored more fully in an IAF Policy and Strategy session on July 21. If you have not received an invitation to the call, send an email to info@informationaccountability.org.