Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

“The trouble is that profits end up serving as a stand-in, or proxy, for truth.” ~ Cathy O’Neil

By Court Skinner

In the not-so-distant past, fallible human beings made decisions about other human beings – some beneficial, others less so – but such decisions were made more or less one at a time, and if one was biased or prejudiced, that was usually obvious to the victim. Today such decisions are increasingly made by machines: artificial intelligence, big data, and algorithms typically built on information that is invisible to the “victim,” with feedback aimed only at improving profits rather than community, and applied on a scale that affects thousands if not millions of people while leaving them little or no recourse to change the outcomes. As O’Neil states, “These models, powered by algorithms, slam doors in the face of millions of people, often for the flimsiest of reasons, and offer no appeal. They’re unfair.”

Cathy O’Neil began her mathematical career as a tenure-track professor of mathematics at Barnard, which shares a math department with Columbia University. Later she became a “quant” for D. E. Shaw, a leading hedge fund. After watching machine intelligence ruin the lives of millions of people on a grand scale, and seeing no lessons learned by the perpetrators of the damage, she decided to do something about it. She has explored the impact of artificial intelligence on hiring decisions, work scheduling, lending, prison sentencing, police work, university admissions, car insurance, and more, finding Weapons of Math Destruction, or WMDs, that undermine the economy and our quality of life on a regular basis.

She evaluates the models used to make decisions against three criteria: opacity, damage, and scale. If a car insurance customer is priced not on his or her driving record but on a credit record, and is never told so, the model is clearly opaque. Applied at large scale, such a model can do significant damage: it sends the wrong signals to drivers, likely drags down the credit ratings of good drivers, and may even force them out of their cars, making their lives still more difficult. Such practices clearly meet all three criteria for a WMD.
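To make that rubric concrete, here is a minimal sketch of how the three questions might be checked in code. Everything in it – the Model class, its fields, and the scale threshold – is hypothetical, invented purely for illustration; O’Neil states the criteria only in prose.

```python
# A toy triage of O'Neil's three WMD criteria: opacity, damage, scale.
# All names and numbers here are hypothetical, chosen for illustration.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    subjects_know_inputs: bool   # can the scored person see what data is used?
    subjects_can_appeal: bool    # is there any recourse against a bad score?
    harms_losers: bool           # does a low score cost jobs, credit, freedom?
    people_scored: int           # rough head count of those affected

def is_wmd(m: Model, scale_threshold: int = 100_000) -> bool:
    opaque = not (m.subjects_know_inputs and m.subjects_can_appeal)
    damaging = m.harms_losers
    large_scale = m.people_scored >= scale_threshold
    return opaque and damaging and large_scale

# The car-insurance case from the text: pricing by credit record,
# undisclosed, applied to millions of drivers.
pricing = Model("credit-based insurance pricing",
                subjects_know_inputs=False, subjects_can_appeal=False,
                harms_losers=True, people_scored=10_000_000)
print(is_wmd(pricing))  # True -- meets all three criteria
```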

Opacity is often achieved through the use of proxies – substitutes for the data one actually cares about. A human being described by a set of proxies is easier to model, but of course the model is far from accurate at predicting the behavior of something as complicated as a person. The purveyors of WMDs may realize this, but as long as the models are profitable they don’t seem to care. Hence biases that would be illegal if made explicit can be buried in models that never use the protected data directly – biases that clearly have nothing to do with a person’s potential for learning, paying back a loan, or becoming a career criminal.
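A small simulation shows how this burying works. All the numbers below are invented: the “model” sees only a ZIP code, never group membership, yet its decisions still split along group lines because the two happen to be correlated.

```python
# A minimal sketch of proxy leakage, with made-up numbers: a decision rule
# that never touches the protected attribute still discriminates through
# a correlated stand-in (here, a ZIP code).
import random

random.seed(0)

def make_person():
    group = random.choice(["A", "B"])
    home = 1 if group == "A" else 2   # hypothetical segregated geography
    if random.random() < 0.1:         # 10% live outside their group's usual ZIP
        home = 3 - home               # flip 1 <-> 2
    return group, home

def approve(zipcode):
    # The model's only input is the proxy; it learned (from whatever
    # history) to reject ZIP 2 outright.
    return zipcode == 1

people = [make_person() for _ in range(100_000)]
for g in ("A", "B"):
    zips = [z for grp, z in people if grp == g]
    rate = sum(approve(z) for z in zips) / len(zips)
    print(f"group {g}: approval rate {rate:.0%}")
# Roughly 90% for group A and 10% for group B -- disparate outcomes from
# a model that, on paper, "doesn't use" group membership at all.
```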

Sadly, the most harmful of these models are intentional schemes to evade oversight and the legal norms that protect the average citizen. The models are often justified as bringing efficiency to decision making, but more often than not that efficiency comes at the price of making many lives less efficient. Compare it to a practice in the oil industry called slant drilling, which lets unscrupulous operators suck the oil from a neighbor’s property: hidden algorithms likewise undermine the laws and regulations intended to make sure all are treated equally, at significant cost to the taxpayers.

How do we know when this is happening to us? You might start by reading this book. Despite the math buried in it, O’Neil makes everything clear with stories that cut to the heart of the matter, and the knowledge gained can alert us when someone’s model is about to bamboozle us into bad decisions. Her first story is of a school teacher fired for incompetence – along with 205 other teachers in her district – on the strength of a model built on test scores: her students’ scores had fallen from a high level that, it turned out, had been inflated by cheating in an earlier grade. Fortunately she found a better job through human recommendations, from people who knew a good teacher when they saw one. She was lucky, of course, to have that backup. The rest of us need to resist this scourge as a community and work together to make sure future decisions, whether by humans or machines, are based on relevant data interpreted to benefit the community, not any single entity.

All of this is not to say that machine assistance in decision making is always maleficent. We have been using statistics in baseball for years, and new AI tools make this even more effective. As O’Neil points out, there is feedback to improve the predictions, the data is open for all to see, and no damage is done – unless you count some professional gamblers occasionally losing more than they win. In fact, I’d venture to say that in most instances where O’Neil has identified WMDs, even minor tweaks could turn the outcomes from negative to positive. For that to happen more often than not, however, we need to be more aware of how the models work and what changes to insist on. Feedback properly acted on will improve a model’s usefulness, but feedback ignored lets a “pernicious feedback loop” become a key component of the damage. You can check out O’Neil’s blog, MathBabe, for specific ideas on how to make this happen.
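Here is a minimal sketch of such a loop, again with invented numbers: a lender only observes repayment on the loans it approves, so a neighborhood written off by one bad initial estimate never generates the data that would correct it.

```python
# A sketch of a "pernicious feedback loop," with hypothetical numbers:
# the model's own decision suppresses exactly the feedback that could fix it.
import random

random.seed(1)
TRUE_REPAY_RATE = 0.80   # the neighborhood's actual reliability
estimate = 0.40          # the model starts out wrongly pessimistic
CUTOFF = 0.60            # lend only if the estimated rate clears this bar

for year in range(5):
    if estimate >= CUTOFF:
        # Approvals produce outcome data; the estimate drifts toward truth.
        outcomes = [random.random() < TRUE_REPAY_RATE for _ in range(1000)]
        estimate = sum(outcomes) / len(outcomes)
    # else: no loans, no observed outcomes, no correction -- frozen estimate.
    print(f"year {year}: estimate {estimate:.2f}, "
          f"{'lending' if estimate >= CUTOFF else 'redlined'}")
# The loop never recovers: the data that would improve the model is the
# very data the model's decision prevents from ever existing.
```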

Related Reading

Thank You, Court Skinner!

Court Skinner Hot Tip

The Real Skinner on Puppy Linux

The Real Skinner on GnuCash

