Six ways (and counting) that big data systems are harming society

3. Discrimination
As corporations, government bodies and others make use of big data, it is important to recognize that discrimination can happen, and is happening, both unintentionally and intentionally. It occurs as algorithmically driven systems offer, deny or mediate access to services or opportunities differently for different people.

Some are raising concerns about how new uses of big data may negatively affect people’s ability to get housing or insurance, or to access education or employment. A 2017 investigation by ProPublica and Consumer Reports showed that minority neighborhoods pay more for car insurance than white neighborhoods with the same risk levels. ProPublica has also shown how prediction tools used in courtrooms for sentencing and bond decisions “are biased against blacks”. Others raise concerns about how big data processes make it easier to target particular groups and discriminate against them.

And there are numerous reports of facial recognition systems that struggle to identify people who are not white. This issue becomes increasingly important as facial recognition tools are adopted by government agencies, police and security systems.

This kind of discrimination is not limited to skin color. One study of Google ads found that men and women are shown different job adverts, with men more often shown ads for higher-paying jobs. And data scientist Cathy O’Neil has raised concerns that the personality tests and automated systems companies use to sort through job applications may draw on health information to disqualify certain applicants based on their histories.

There are also concerns that the use of crime prediction software can lead to the over-monitoring of poor communities, as O’Neil also found. The inclusion of nuisance crimes such as vagrancy in crime prediction models distorts the analysis and “creates a pernicious feedback loop” by drawing more police into the areas where vagrancy is likely. This leads to more punishment and more recorded crime in those areas.
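The feedback loop described above can be illustrated with a minimal simulation. All of the numbers below are hypothetical and the model is deliberately simplistic – it is a sketch of the mechanism, not of any real policing system. Two areas have identical underlying rates of nuisance offenses, but offenses are far more likely to be recorded where patrols are present to observe them, and patrols are sent wherever recorded crime is highest:

```python
# Toy simulation of the feedback loop in hotspot-style crime prediction.
# All figures are hypothetical. Two areas have the SAME underlying rate
# of nuisance offenses; the only difference is a small initial imbalance
# in how many offenses were recorded.

true_rate = [100, 100]   # actual nuisance offenses per period, identical
recorded = [10, 12]      # a small initial imbalance in recorded counts
hotspots = []            # which area the model flags each period

for period in range(10):
    # The model flags the area with the most recorded crime and sends
    # the extra patrols there.
    hotspot = recorded.index(max(recorded))
    hotspots.append(hotspot)

    # Heavier patrolling makes nuisance offenses far more likely to be
    # observed and recorded, even though the true rate never changed.
    detection = [0.05, 0.05]
    detection[hotspot] = 0.50
    for i in range(2):
        recorded[i] += true_rate[i] * detection[i]

print(hotspots)  # the same area is flagged every single period
print(recorded)  # the flagged area ends up with several times the recorded crime
```

A slightly higher recorded count on day one is enough to make the model flag the same area every period, and the gap in recorded crime widens indefinitely – the records confirm the prediction because the prediction created the records.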

4. Data breaches
There are numerous examples of data breaches in recent years. These can lead to identity theft, blackmail, reputation damage and distress. They can also create a lot of anxiety about future effects. One study discusses these issues and points to several examples:

· The 2015 breach of the US Office of Personnel Management leaked people’s fingerprints, background check information and analyses of their security risks.

· In 2015 Ashley Madison, a commercial website billed as enabling extramarital affairs, was breached and more than 25 gigabytes of company data including user details were leaked.

· The 2013 Target breach in the US resulted in leaked credit card information, bank account numbers and other financial data.

5. Political manipulation and social harm
Fake news, bots and filter bubbles have been in the news a lot lately. They can lead to social and political harm as the information that informs citizens is manipulated, potentially leading to misinformation and undermining democratic and political processes as well as social well-being.

One recent study by researchers at the Oxford Internet Institute details the diverse ways that people are trying to use social media to manipulate public opinion across nine countries.

6. Data and system errors
Big data blacklists and watch-lists in the US have wrongfully identified individuals. Being wrongfully listed can negatively affect employment and the ability to travel – and, in some cases, has led to wrongful detention and deportation.

In Australia, for example, there have been investigations into the government’s automated debt recovery system after numerous complaints of errors and of unfair targeting of vulnerable people. And American academic Virginia Eubanks has detailed automated system failures in Indiana, Florida and Texas that devastated many lives at great cost to taxpayers, as errors led to people losing access to Medicaid, food stamps and other benefits.

We need to learn from these harms. There are a range of individuals and groups developing ideas about how data harms can be prevented. Researchers, civil society organizations, government bodies and activists have all, in different ways, identified the need for greater transparency, accountability, systems of oversight and due process, and the means for citizens to interrogate and intervene in the big data processes that affect them.

What is needed now is the public pressure, political will and effort to ensure this happens.

Joanna Redden is Lecturer in Critical Data Studies, Co-Director Data Justice Lab, Cardiff University. This article is published courtesy of The Conversation (under Creative Commons-Attribution / No derivative).