Book Review - Weapons of Math Destruction
In an age when society seems governed by numbers, author and former Wall Street quant Cathy O’Neil argues that big data is increasing inequality and threatening democracy.
Mathematical models are increasingly used by governments and businesses to inform and streamline their decision-making. A model is a combination of an algorithm (a set of rules that tell a computer how to do something) and data. Often this data is “big data”, which simply means there is a lot of it. In principle this sounds like a great idea. But many of us relate to mathematics with a blend of trust and fear, and for this reason these models tend to be accepted as articles of faith, rarely interrogated or second-guessed.
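A minimal sketch may help fix the idea of “algorithm plus data”. Everything in the snippet below is invented for illustration and is not from the book: the function is the algorithm, the list of records is the data, and the model is simply one applied to the other.

```python
# Illustrative only: a toy "model" in O'Neil's sense -- an algorithm
# (fixed scoring rules) applied to data (records). The field names,
# weights and records are invented assumptions, not from the book.

# The "data" half: hypothetical applicant records.
applicants = [
    {"name": "A", "years_experience": 6, "credit_score": 720},
    {"name": "B", "years_experience": 2, "credit_score": 540},
]

def score(applicant: dict) -> float:
    """The "algorithm" half: a hand-written rule mapping a record to a number."""
    return applicant["years_experience"] * 10 + applicant["credit_score"] / 100

# The model is the algorithm run over the data.
for a in applicants:
    print(a["name"], round(score(a), 1))  # A 67.2, B 25.4
```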
Cathy O’Neil is a former Wall Street quant turned data scientist and activist in the Occupy movement. In Weapons of Math Destruction, she examines widely used mathematical models that crunch big data. Her book is topical in light of the recently exposed weaponisation of Facebook’s algorithms and data by bad actors to influence election results in the US and Europe.
O’Neil’s focus is on a particular class of models that she terms “Weapons of Math Destruction” (WMDs). These models are opaque, accountable to no one, and operate at scale. She contends that WMDs increase inequality and reinforce discrimination against the most vulnerable members of society, the poor in particular. They take the form of scoring systems and statistical profiles that act as gatekeepers to insurance, credit, employment opportunities and medical cover. In the US they are used in finance, higher education and the criminal justice system, to screen applicants for minimum-wage jobs, and to target advertising at vulnerable consumers.
Models are supposed to be impartial: blind to gender, race and privilege. They are often deployed precisely to reduce the bias of institutions such as courts. The truth, however, is that computer programs reflect the prejudices and beliefs of the people who write them. The data selected for a model’s algorithm to run on can also encode bias. In models that screen job applicants, for example, it is common to use credit score as a proxy for “responsibility”. The effect is twofold: first, people with a poor credit record become less likely to get a job, and thus more likely to keep their poor credit record. Second, the proxy ignores the many factors other than “irresponsibility” that can produce a poor credit record (such as illness, a death in the family, or retrenchment) and so further punishes people who have suffered bad luck or personal trauma.
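The feedback loop is easy to see in a toy simulation. The sketch below is not from the book; the cut-off, the score dynamics and the round count are all invented assumptions, chosen only to show how a screen that sees nothing but the proxy can lock a borderline applicant into decline.

```python
# A hedged, invented sketch of the credit-score feedback loop described
# above. THRESHOLD and the +/-20 score dynamics are assumptions made up
# for illustration; they do not come from the book or any real screen.

THRESHOLD = 600  # hypothetical cut-off used by the screening model

def passes_screen(credit_score: int) -> bool:
    # The screen never sees illness, bereavement or retrenchment,
    # only the proxy number.
    return credit_score >= THRESHOLD

def simulate(credit_score: int, rounds: int = 5) -> int:
    """Each round: apply for a job. Employment lets the score recover;
    rejection, and the income it costs, erodes the score further."""
    for _ in range(rounds):
        if passes_screen(credit_score):
            credit_score += 20  # steady income -> score improves
        else:
            credit_score -= 20  # no job -> score slips further
    return credit_score

# Two applicants separated only by a run of bad luck before round one:
print(simulate(620))  # starts just above the cut-off and climbs to 720
print(simulate(580))  # starts just below and is locked out, falling to 480
```

Under these (entirely assumed) dynamics, a 40-point difference at the start grows to a 240-point gap after five rounds: the model does not merely measure “responsibility”, it manufactures the divergence it claims to detect.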
South Africa has not yet seen the same widespread adoption of these models, but there is certainly growing interest from business. Predatory lenders, for example, can target potential borrowers based on their social media or other internet activity. The ethics and repercussions of marketing high-interest loans to people whose clicks suggest they are in financial difficulty bear careful consideration.
What can we do to mitigate the social impacts of models? Do we ever get a free pass to suspend our critical and ethical thinking in the name of modern technology and efficiency? O’Neil provides guidance. “Big Data processes codify the past,” she says. “They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide.”