Do pervasive watching and recording pose risks to individuals and society that justify new legislation specific to those risks?
To be effective, legislation for the digital age must target specific human interests that currently are, or in the future will be, at high risk of being abused. Without an understanding of the actual or highly likely wrongs to be fixed, legislation misses the target. Complaints from both organizations and individuals that the EU General Data Protection Regulation (GDPR) and California’s new privacy laws create huge burdens and complexity, with limited improvements or benefits for individuals, are indicative of this problem. Saying privacy needs to be improved is insufficient; the links to harms need to be identified and stated. European policymakers promised that the GDPR would be risk-based, and only now is the question “risk of what” arising.
The Information Accountability Foundation (IAF) is holding a workshop addressing “Risk of What” on September 22. In preparation for that workshop, the IAF team has been exploring numerous ways of homing in on the intrusions, and the potential risks and benefits, of a highly observational world, one that drives the predictive sciences, which in turn lead to decisions impacting individuals.
As the team talked and wrote, it narrowed the field to five topics that would help the IAF community focus on the fundamental question of “Risk of What.” One of those topics is what boundaries must exist where observation is central to the way products and services actually work and are delivered.
This concern with observation is not new. In the United States, Section 652B of the Restatement (Second) of Torts sets forth the privacy tort of Intrusion Upon Seclusion, which most states have adopted in their case law. The Charter of Fundamental Rights of the European Union protects freedoms, including Article 7’s “Respect for Private and Family Life.” However, to take advantage of these protections, the individual needs to be aware of the observation.
The IAF team’s exploration of the question of “Risk of What” led it to ask whether a generation of accelerating, detailed observation of individuals by both the public and private sectors has raised a new question: is a law dedicated to setting the boundaries for watching individuals and generating data about those individuals necessary? This question is being asked in the context of:
(1) increasingly complex processing, where individual control is less effective in governing the market and fair processing becomes the most likely objective for privacy law, and
(2) observation becoming more and more central to how things function (e.g., defibrillators and pacemakers are embedded in people’s chests, sensors in the cars people drive report back to car manufacturers, and smart home devices create shopping lists, track the homeowners’ sleep, and reorder TV channels based on who is watching).
Thus, to answer the question “risk of what,” the key questions on observation that will be asked on September 22 are:
What is out-of-bounds observation, and does it merit its own statute, a law specific to observation?
Many contemporary technologies (e.g., mobile smartphones and apps, smart medical devices, smart home devices, water metering devices, and smart cars) require observation to work and improve. Does this need to observe call for nuanced legislation?
Is one possible approach legislation that prohibits some observation and limits the application of much of the data that observation produces?
The observation behind highly targeted ads has made individuals skittish about the number of organizations that “watch” and track them, and fearful about where else that data may be reused or sold. At the same time, they are anxious that only a few organizations seemingly dominate observation and turn that observation into data, which is then transformed into information, insights, and finally action.
The IAF’s model legislation, the FAIR and OPEN USE Act, does not directly address issues of observation and tracking. Most privacy laws treat these issues as simple matters of data collection and minimization. California addresses them as matters of online tracking and data sales. To be effective, the law should begin with the basic questions of ‘where can I watch’, ‘what can I record’, and lastly ‘are there boundaries on what I can do with what I have seen and recorded’. Privacy laws usually deal with part of the last question, what can be done with the recorded data, but rarely address the rest.
The process of turning observation into data and data into information also leads to knowledge. Knowledge, for better or worse, drives mankind forward. The question is whether a separate law is needed on the extent of permissible watching and recording of individuals as part of a harms-based, risk-avoidance system of governance.
The IAF is uncertain of the answer but believes the question needs to be asked.