Can we build trust instead of the empty expectations of data subject control?
Every day we make new memories. With freely available cloud space to store them, yet little to no time to reminisce over and rehash these memories, we have become data hoarders. We have been encouraged to look forward to new memories, to buy new gadgets with more internal storage and ever more advanced cameras. We have been set into this kaleidoscopic motion, never stopping to ask an important question: why? A question that must be contextualized; a question that must be answered in different times and spaces.
Companies have told us that their policies (a commitment of aspiration between them and their users) require them to delete this data after a period of time; that we can always submit requests to delete and correct this data; that we can report them to supervisory authorities. But let us be practical about a few things. Very few people keep track of whether these companies are compliant, and it would be even more unreasonable to expect mostly under-resourced supervisory authorities to track the compliance of every company.
This approach lacks congruence: the web permeates borders, but the law remains confined to its progenitor’s borders; technology (tech) companies leverage advanced systems while supervisory authorities rely on traditional methods of enforcing the law. Even more concerning is the power imbalance between data subjects and the tech companies.
We have been assuaged with assurances that we shall have control over our data, but what does control mean? Is this control fixed in time and space? Why does control over personal data come only with burdens and never benefits? Black’s Law Dictionary defines “control” as “to exercise power or influence over.” More importantly, we must ask ourselves whether we actually have any power or influence over the data practices of these tech companies.
To have control is to have visibility into present and future data practices, but we have no such visibility. How, then, do we, the users, have control? To have control is to be enlightened about the value of the thing being controlled. How can we casually assign control to people who are not digitally literate? Africa, for example, relies heavily on imported tech devices, yet its population is among the least digitally literate, with little to no understanding of compliance requirements that are often buried in voluminous text.
The concept of control over data is proving ever more unreasonable, and perhaps unattainable, ceteris paribus. We have been burdened with the duty to take control of our personal data without the benefit of fully exercising control over it. In one Ugandan case, Aida Atiku v. Centenary Bank (HCCS 754/2020), the plaintiff, an elderly woman, lost millions of Uganda Shillings she had entrusted to the bank for safe custody; the court ruled that she ought to have taken steps to ensure that her personal data was not accessed. We are told to curate distinct passwords for all our different online accounts and are expected to remember them all, else we risk a finding of contributory negligence.
Why doesn’t the scope of control extend to the commercial benefits of this data? Control over personal data should equally give us the choice to profit from access to it. This, largely, speaks to fairness. Let us make a comparison. In the 1950s, a woman named Henrietta Lacks was diagnosed with cervical cancer; her cells were harvested without her consent, and she eventually died. Those cells were used to develop some of the most monumental advances in medical history, with pharmaceutical and other companies profiting from them, yet her family could hardly afford medical care.
Ted Slavin, on the other hand, a haemophiliac, was suffering from bouts of hepatitis B when his doctor told him that his body was producing something valuable. Coincidentally, researchers around the world were working to develop a vaccine for hepatitis B, and doing so required a steady supply of antibodies like Slavin’s, for which pharmaceutical companies were willing to pay large sums[1]. Slavin sold his serum to anyone who wanted it. But because he hoped to see hepatitis B cured, he later entered into a partnership with Baruch Blumberg, granting Blumberg free access to his blood. It was Blumberg who went on to develop the first hepatitis B vaccine.
Why don’t users have a bona fide choice to determine what this data can and will be used for? Customers can choose which products to buy based on a company’s practices, from child labour to environmental impact, among other things. And yet a data subject’s control over personal data seems confined. To have control is to have the freedom to choose and determine.
The importance of choice in determining and influencing how personal data is used cannot be overemphasized. Sharing personal data is highly risky, a fact that has been underscored, deliberately or otherwise. For instance, phone companies allow us to unlock our phones using facial recognition, but this technique, efficient as it may be, carries the risk of relinquishing one of the most vital aspects of our bodies.
Might you wonder whether advances in 3D technology might allow anyone to print a person’s facial features, giving them access to any site that requires facial recognition biometrics? Might you wonder whether those facial features could be used in deepfakes? Might you wonder what tech companies might use these features for in the long term, 100 years from now? What happens to this data in the next generation, and what will it be used for? Can one bequeath their rights to this data? Can we trust the purpose indicated in privacy policies? What happens when the purpose changes and the data subject is long dead? The more we are deluded into believing we have control, the more we lose it. These are the conundrums we must address when assigning control over personal data.
The spectrum within which we view the notion of control should be widened, both in time and space. Assigning control wholesale creates empty expectations and a misperceived sense of power over personal data. We must enlighten the masses about the value of this data, but more importantly, protecting personal data must go beyond compliance and risk mitigation. We must foster data protection as a social value. We must advocate for fairness, equity, transparency and other ethical values to guide the collection, processing and use of personal data.
The current approach to control over personal data is breeding a contest between users and tech companies: us against them. Companies have responded by developing ever more esoteric, obscure and indiscernible ways to collect and process this data. Yet the data revolution continues to be greased with our data, and some of humanity’s greatest problems will be solved by relying on vast amounts of it. We must therefore appreciate our shared interests and build trust along those lines.