Steven Brill offers this New York Times book review of Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein (NYT 5/18/21), here.  The book discusses noise in human judgment, defined as “unwanted variability in judgments.”  Essentially, it discusses how noise prevents consistency in judgments in settings where consistency should have a high value.

Criminal sentencing draws the authors’ attention.  Readers of this blog will recall that, prior to the federal Sentencing Guidelines in the 1980s, federal sentencing was much of a crapshoot, with sentences for similar crimes varying all over the lot.  Some described sentencing as the wild, wild west.  Then the Guidelines came along to bring more consistency by providing somewhat objective matrices that could calibrate a sentencing range.  Then United States v. Booker, 543 U.S. 220 (2005), was decided, returning more sentencing discretion to the judge.  As some have said, Booker brought the wild, wild west back to sentencing, so long as the sentencing judge can come up with some minimum rationale for the (usually downward) variance that can pass a laugh test if the Government or the defendant appeals the out-of-Guidelines sentence.

Brill, here, a lawyer and author of books and commentary on the law and related subjects, starts the book review with the following:

A study of 1.5 million cases found that when judges are passing down sentences on days following a loss by the local city’s football team, they tend to be tougher than on days following a win. The study was consistent with a steady stream of anecdotal reports beginning in the 1970s that showed sentencing decisions for the same crime varied dramatically — indeed scandalously — for individual judges and also depending on which judge drew a particular case.

Brill notes that the authors claim that these apparently irreconcilable inconsistencies are about noise, which they define as “unwanted variability in judgments.”

Consistency equals fairness. If bias can be eliminated and sensible processes put in place, we should be able to arrive at the “right” result. Lack of consistency too often produces the wrong results because it’s often no better, the authors write, than the random judgments of “a dart-throwing chimpanzee.” And, of course, unexplained inconsistency undermines credibility and the systems in which those judgements are made.

As the authors explain in their introduction, a team of target shooters whose shots always fall to the right of the bull’s-eye is exhibiting a bias, as is a judge who always sentences Black people more harshly. That’s bad, but at least they are consistent, which means the biases can be identified and corrected. But another team whose shots are scattered in different directions away from the target is shooting noisily, and that’s harder to correct. A third team whose shots all go to the left of the bull’s-eye but are scattered high and low is both biased and noisy.

(Side note: from my days in Army basic training, I learned that shots all over the place are a problem, but a consistent pattern of off-target shots can be easily corrected, turning a 17 1/2-year-old (that was me) who had hardly ever shot a weapon into a marksman.  Of course, it is sometimes said that consistency is the hobgoblin of small minds, but when shooting at a target, consistency, even off-target consistency, is good.)
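The shooting analogy maps onto two familiar statistics: bias is the average offset of shots from the bull’s-eye, and noise is the scatter around that average.  As a minimal sketch (my own illustration, not from the book; the team names and numbers are hypothetical), the three teams the authors describe can be simulated like this:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable

def shots(bias, noise, n=1000):
    """Simulate horizontal shot positions relative to the bull's-eye at 0.

    bias  = systematic offset (mean); noise = scatter (standard deviation).
    """
    return [random.gauss(bias, noise) for _ in range(n)]

# Team A: consistently right of the target (biased, not noisy)
# Team B: scattered around the target (noisy, not biased)
# Team C: left of the target and scattered (both biased and noisy)
teams = {
    "biased": shots(bias=5.0, noise=0.5),
    "noisy": shots(bias=0.0, noise=5.0),
    "both": shots(bias=-5.0, noise=5.0),
}

for name, xs in teams.items():
    # Mean offset estimates the bias; standard deviation estimates the noise.
    print(f"{name:>6}: mean offset {statistics.mean(xs):+.2f}, "
          f"scatter {statistics.stdev(xs):.2f}")
```

The point the authors make falls out of the arithmetic: Team A’s error shows up entirely in the mean offset, so a single correction (aim left) fixes it, while Team B’s error lives in the scatter, which no single correction can remove.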

Brill notes that, in some cases, noise-reduction techniques, which the authors call “decision hygiene,” can be used to reduce the undesirable noise.

Some decision hygiene is relatively easy. “Occasion noise” — the problem of a judge handing out stiffer sentences depending on whether a favorite sports team won or lost or whether it’s before or after lunch (yes, studies have found that, too) — can, like bias, be recognized during a “noise audit” and presumably dealt with. “System noise,” in which insurance adjusters, doctors, project planners or business strategists assess the same facts with that unfortunate variability, requires a more energetic decision hygiene.

I suspect that the type of noise illustrated in federal sentencing is more the latter type because federal judges would resist the noise audit.  (Having said that, I suspect that some agency does in fact gather data that could be analyzed to identify noise among judges as a group and to zero in on particular judges who let noise affect their sentencings; whether those judges might be amenable to such noise audits is another issue.  Based on my experience with various judges, I can speculate that some would and some would not.)

I haven’t read the book yet, so the foregoing comments are only my review of the book review.  But I do plan to read it.