Weapons of Math Destruction: Eight Years On
In 2016, Cathy O’Neil exposed to a mass audience how mathematical algorithms shape modern life, often in ways that reinforce inequalities and injustices.
Weapons of Math Destruction was an uncomfortable read at the time it was published. More than eight years later, one of the most disturbing aspects of the book is that, while awareness of these problems has grown, they have become so normalised that they have lost some of their impact.
Governments, businesses, and practitioners are, far too often, fully aware of the dangers of these algorithms. Recently, the prevalence of mass-market AI products has accelerated these dangers further.
Examples of issues not unlike those pointed out by O'Neil are still appearing thick and fast. In November 2024 in the UK, the ICO published a study highlighting how AI recruitment tools seriously disadvantage women, people of colour, and the LGBTQIA+ community. In a separate article in August 2024, Lehigh University pointed out how AI in the mortgage application process can discriminate against Black applicants, adding that there is a relatively easy fix for those who choose to listen.
These issues, once shocking, all too often feel like they are treated as an inevitable trade-off for technological progress.
Despite this growing body of evidence, governments in the U.S. and the UK seem reluctant to legislate, seemingly prioritising economic growth and the AI arms race over the well-being of the very people who elected them.
The UK has positioned itself as an AI-friendly hub with minimal regulation, and the Trump administration is warning about the dangers of over-regulation(!). Meanwhile, the European Union has taken a more proactive stance, introducing the AI Act to set clearer guardrails. These disparities highlight the political will, or lack thereof, when it comes to AI governance. Policies may change with governments, but a single governmental term is a long time given the current pace of AI development.
This reality makes Weapons of Math Destruction and O'Neil's subsequent work more urgent than ever. While some efforts have been made toward AI regulation, the systemic risks she identified (opaque decision-making, self-reinforcing bias, and a lack of accountability) remain largely unaddressed.
The problems highlighted in this book, and the ethical dilemmas they raise, are not just academic concerns; they are poised to define this era of technological advancement and perhaps even this period of humanity.
For this reason, if you have not read this book, now is as good a time as any. Revisiting it with the benefit of hindsight brings fresh insights, as many of the dangers O'Neil warned about have played out in real time.
Since its publication, O’Neil has continued her advocacy by founding a consulting firm, ORCAA, which helps companies manage their algorithmic risk. Her voice has undoubtedly influenced the discourse on ethical AI, but the question remains: are businesses and policymakers actually listening?
It's never too late to change our trajectory regarding ethical AI. But nearly a decade on, O'Neil's warning remains as urgent as ever, and yet it has been disconcertingly overlooked.