Six ways (and counting) that big data systems are harming society

By Joanna Redden

Published 22 December 2017

There is growing consensus that with big data comes great opportunity, but also great risk. These risks are not getting enough political and public attention, and one way to better appreciate them is to consider how people are already being negatively affected by uses of big data. We need to learn from these harms. A range of individuals and groups are developing ideas about how data harms can be prevented: researchers, civil society organizations, government bodies and activists have all, in different ways, identified the need for greater transparency, accountability, systems of oversight and due process, and the means for citizens to interrogate and intervene in the big data processes that affect them. What is needed now is the public pressure, and the political will and effort, to ensure this happens.

At Cardiff University’s Data Justice Lab, we decided to record the harms that big data uses have already caused, pulling together concrete examples cited in previous work so that we might gain a better big-picture view of where we are heading.

We did so in the hope that such a record will generate more debate, and more public intervention, in shaping the kind of big data society and future we want. The following examples are a condensed version of our recently published Data Harm Record, a running record that will be updated as we learn of more cases.

1. Targeting based on vulnerability
With big data come new ways to socially sort with increasing precision. By combining multiple data sets, a great deal can be learned about individuals. This has been called “algorithmic profiling”, and it raises concerns about how little people know about how their data is collected as they search, communicate, buy, visit sites, travel, and so on.

Much of this sorting goes under the radar, although the practices of data brokers have been getting attention. In her testimony to the US Congress, World Privacy Forum’s Pam Dixon reported finding data brokers selling lists of rape victims, addresses of domestic violence shelters, sufferers of genetic diseases, sufferers of addiction and more.
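To make the mechanism concrete, here is a deliberately simplified sketch in Python. The users, data sources, signal words and threshold are all invented for illustration and reflect no real broker’s systems; the point is only how joining separate traces of a person’s life can produce a “vulnerable” label that no single data set would support on its own.

```python
# Toy illustration (hypothetical data and field names) of how joining
# separately collected data sets lets a profiler infer far more than
# any single source reveals on its own.

purchases = {
    "user_17": ["pharmacy", "payday_lender"],
    "user_42": ["bookshop", "grocer"],
}
location_pings = {
    "user_17": ["support_clinic", "court_house"],
    "user_42": ["gym", "office"],
}
searches = {
    "user_17": ["debt help", "eviction rights"],
    "user_42": ["holiday deals"],
}

def build_profile(user_id):
    """Merge independently collected traces into a single profile."""
    return {
        "purchases": purchases.get(user_id, []),
        "locations": location_pings.get(user_id, []),
        "searches": searches.get(user_id, []),
    }

def flag_vulnerable(profile):
    """Crude scoring rule of the kind a list broker might apply."""
    signals = {"payday_lender", "debt help", "eviction rights", "support_clinic"}
    hits = sum(
        1
        for values in profile.values()
        for item in values
        if item in signals
    )
    return hits >= 2  # weak signals combine into a 'vulnerable' label

for uid in ["user_17", "user_42"]:
    print(uid, "flagged:", flag_vulnerable(build_profile(uid)))
```

Each source on its own looks innocuous; the harm emerges from the join.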

2. Misuse of personal information
Concerns have been raised about how credit card companies use personal details, such as where someone shops or whether they have paid for marriage counselling, to set rates and limits. One study details the case of a man whose credit limit was reduced because American Express determined that others who shopped where he shopped had a poor repayment history.

This 2008 case was an early big data example of “creditworthiness by association”, and it is linked to ongoing practices of determining a person’s value or trustworthiness by drawing on big data to make predictions about them.
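A toy sketch of the logic at work, again in Python and with entirely invented merchants and numbers (no real issuer’s model is being described), shows how an individual’s limit can be cut on the strength of other people’s behaviour:

```python
# Hypothetical sketch of "creditworthiness by association": a customer's
# limit is adjusted using the repayment history of *other* people who
# shop at the same merchants. All figures are made up for illustration.

# Average repayment rate of all customers seen at each merchant (invented).
merchant_repayment_rate = {
    "discount_store": 0.62,
    "department_store": 0.91,
    "grocer": 0.88,
}

def adjusted_limit(base_limit, merchants_visited):
    """Scale a customer's limit by the average repayment rate of the
    merchants they visit; their own record never enters the calculation."""
    rates = [merchant_repayment_rate[m] for m in merchants_visited]
    association_score = sum(rates) / len(rates)
    return round(base_limit * association_score)

# A customer with a spotless record who happens to shop at a "risky" store:
print(adjusted_limit(10_000, ["discount_store", "grocer"]))  # prints 7500
```

The customer’s own repayment history never appears in the calculation, which is precisely what makes this kind of scoring feel unjust to those on the receiving end.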