
The algorithms of justice involve unpalatable trade-offs

One of the things that makes policymaking easier in the modern world is that we know more. We have better information about outcomes, a better understanding of what works and what doesn’t, and — thanks to advances in technology — we can use algorithms and machine learning to make better-informed decisions.

But better-informed decisions aren’t necessarily the same as “better decisions” and they definitely aren’t the same as “more palatable” ones.

Take, for example, the case of Sean Hogg, the cause of a recent political row in Scotland. Hogg, who at age 17 raped a 13-year-old girl, has been sentenced to 270 hours of community service because, under Scottish sentencing guidelines, judges are instructed to take the age of the offender into account.

Sentencing guidelines are, in many ways, the most common form of “algorithm” in use in public policy today, although we don’t often think of them this way. We feed a series of data points — the nature of the crime, the circumstances of the offence, various biographical details about the offender and the victim — into the machine to produce a set of options for the presiding judge to consider.
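Stripped to its logic, a guideline of this kind looks something like the sketch below, written in Python. The factor names, weights and thresholds are invented purely for illustration; they are not the Scottish Sentencing Council’s actual rules.

    # A minimal sketch of a sentencing guideline treated as an algorithm.
    # The factor names, weights and thresholds are hypothetical illustrations,
    # not the Scottish Sentencing Council's actual rules.

    def recommended_options(offence_severity, offender_age, prior_convictions):
        """Map the facts of a case to a band of options for the judge."""
        score = offence_severity      # e.g. 0-10, fixed by the guideline for the offence
        if prior_convictions:
            score += 2                # aggravating factor
        if offender_age < 25:
            score -= 3                # a youth adjustment of the kind the guidelines require
        if score >= 8:
            return ["custodial sentence"]
        if score >= 5:
            return ["custodial sentence", "community payback order"]
        return ["community payback order", "fine"]

    # A severe offence by a 17-year-old first-time offender drops out of the
    # custody-only band once the youth adjustment is applied.
    print(recommended_options(offence_severity=9, offender_age=17, prior_convictions=False))

In this toy version, the youth adjustment is what moves a case out of the custody-only band; “tweaking the algorithm” means changing that adjustment or the thresholds around it.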

The algorithm that produced the Hogg sentence is a good case study of the broader challenges involved in using algorithms in public policy. We know that many prisons operate as “business schools” of crime: they provide social networking and mentoring opportunities, and some people leave as more serious criminals than when they went in. As such, we have good reason to want to avoid jailing first-time offenders where possible. And we know, too, that although there is no hard-and-fast rule about when our brains are fully developed or we reach “full maturity”, it broadly occurs in our twenties. So there are good arguments for handing out fewer jail sentences to first-time offenders, particularly those under a certain age.

But many of us feel, instinctively, that while sending people to prison early in life should generally be avoided, and though a 17-year-old might make worse decisions than they would aged 27, any rape, let alone that of a child, is an abhorrent crime that ought to carry a particularly severe sentence. Our existing understanding of the data says one thing, but our moral intuition says another.

One response to policy failures of this type is to tweak the algorithm: to increase the sentence length or to dismantle or weaken some of the protections we’ve installed on the grounds of age. That is part of why the rise of algorithms and big data is exciting for public policy: we can better use evidence to shape our policymaking, and more easily understand why we’ve reached a conclusion we don’t like.

But although sentencing guidelines are a good example of algorithmic logic in public policy, they are in some respects one of the easiest examples. We’ve always had to trade off punishment, deterrence, the maturity of offenders and rehabilitation in criminal sentencing. In many ways, technology gives these old debates a new level of precision. While policymakers have long been divided over the right balance between individual responsibility, reducing crime overall and justice for specific crimes, we can now debate the exact weighting each should be given: even if we conclude that the answer is “none at all” in cases like Hogg’s.

Where it becomes more complex is when better information has the potential to change not only how well informed we are, but the terms of the debate about the decisions we make. We know, for example, that in any healthcare system there is a degree of triage: clinicians make decisions about prioritising one patient over another, one prospective recipient of a donated organ over another. What if the data shows that people who are more affluent are more likely to benefit from an organ transplant, precisely because of their economic advantages? Should we include that in our decision-making processes or not?
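Put in code, the dilemma comes down to a single line. The sketch below is hypothetical: the weights, variable names and the expected_benefit function are invented for illustration, and the point is only that the contested question reduces to whether one feature is allowed into the model.

    # A hypothetical scoring function for organ allocation. The variable names
    # and weights are invented; no real allocation body is being described here.

    def expected_benefit(clinical_match, years_of_life_gained,
                         affluence=0.0, include_affluence=False):
        """Score a candidate recipient; optionally weight in socioeconomic status."""
        score = 0.6 * clinical_match + 0.4 * years_of_life_gained
        if include_affluence:
            # The contested step: the data may say better-off patients do better
            # after transplant, but should that be allowed to move the score?
            score += 0.1 * affluence
        return score

Everything else in that function is uncontroversial arithmetic; the entire argument is over whether the include_affluence switch is ever turned on.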

The major advantage of the era of better information, and of better tools with which to handle it, is that we can, to a greater extent than ever before, quantify the consequences of our choices. But it doesn’t change the fact that we will often have to choose between outcomes we don’t like, and that while new data sources can better inform us, they may also pull our decisions in directions that make us uncomfortable.

One temptation for governments will be to have those debates behind closed doors: to be vague about what the data tells us and to keep adjusting algorithms in private. But one benefit of the era of big data is the ability to reach decisions in a more deliberative fashion — to discuss plainly what trade-offs are involved. That is worth fighting for.

stephen.bush@ft.com
