A fascinating article in the Guardian is of relevance to anyone who builds or uses technology in the justice system.
After highlighting the range of decisions made by algorithms today, and their potential consequences, the writer, John Naughton, moves to the following challenges:
First of all, who has legal responsibility for the decisions made by algorithms? The company that runs the services that are enabled by them? Maybe – depending on how smart their lawyers are.
But what about the programmers who wrote the code? Don’t they also have some responsibilities? Pasquale reports that some micro-targeting algorithms (the programs that decide what is shown in your browser screen, such as advertising) categorise web users into categories which include “probably bipolar”, “daughter killed in car crash”, “rape victim”, and “gullible elderly”. A programmer wrote that code. Did he (for it was almost certainly a male) not have some ethical qualms about his handiwork?
Naughton's answer is an ethical code of conduct for those who develop algorithms. It's a potentially powerful idea.
The first examples of the issue that I ran into were at the Midtown Community Court, 20 years ago. We built the screen below:

[Screenshot: Midtown Community Court Judge Screen — Rights Reserved]
It was designed to focus judges, using color coding, on what were perceived to be the most important facts about the defendant. Those who know me will not be surprised that I recommended a design that foregrounded social service needs, capacities, and community ties, with the judge having to click through for data such as arrest details and criminal record. Those who know judges will not be surprised to hear that the judges wanted to see all of it at the same time. Indeed, the Chief Judge of Manhattan Criminal Court at the time suggested getting a big monitor on which everything could be seen at once.
That was a simple one, with the intent, methods, and effect relatively transparent. It got trickier when judges asked us to develop individualized predictions of compliance with the various available alternative sanctions, which we were able to do using statistically validated factors established in already completed research.
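To make the shape of such a prediction concrete, here is a minimal sketch of the kind of scoring a system like that might use. The factor names, weights, and intercept are hypothetical placeholders, not the ones we used; in a real system every number would come from the validated research.

```python
import math

# Hypothetical, illustrative factors and weights. In a real system each one
# would come from statistically validated, already completed research.
VALIDATED_WEIGHTS = {
    "stable_housing": 0.9,
    "employed_or_in_school": 0.7,
    "prior_failures_to_appear": -1.1,
    "substance_use_indicated": -0.6,
}
INTERCEPT = 0.2

def predicted_compliance(defendant: dict) -> float:
    """Estimate the probability that a defendant completes an alternative sanction.

    A simple logistic model over binary/count factors; every number here is a
    placeholder for illustration only.
    """
    score = INTERCEPT
    for factor, weight in VALIDATED_WEIGHTS.items():
        score += weight * defendant.get(factor, 0)
    return 1 / (1 + math.exp(-score))

# Example: stable housing, but two prior failures to appear.
print(predicted_compliance({"stable_housing": 1, "prior_failures_to_appear": 2}))
```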
Now, it is far, far more complicated, with plans being discussed to apply algorithms to triage decisions about how courts put cases into tracks, and to decisions about how community-based self-help programs determine what resources litigants need in order to have their claims heard.
Moreover, thought is being given to “self-learning” algorithms that will change based on outcomes achieved.
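To illustrate what "self-learning" means in this setting, here is a minimal sketch of a triage score that adjusts its own weights as outcomes come in. The track names, factors, learning rate, and cutoff are all assumptions made for illustration; the point is only that each resolved case shifts how future cases are routed, which is what makes the safeguards discussed below matter.

```python
# Hypothetical factors and parameters; real ones would come from validated research.
LEARNING_RATE = 0.05
weights = {"self_represented": 1.0, "complex_claim": 1.5, "limited_english": 1.2}

def triage_score(case: dict) -> float:
    """Weighted sum used to route a case to a track."""
    return sum(w * case.get(f, 0) for f, w in weights.items())

def assign_track(case: dict, cutoff: float = 2.0) -> str:
    """Route cases above the cutoff to fuller services, others to self-help."""
    return "full-service" if triage_score(case) >= cutoff else "self-help"

def update_from_outcome(case: dict, claim_was_heard: bool) -> None:
    """The 'self-learning' step: if a claim was not heard, nudge up the weights
    of the factors present in that case so similar cases score higher next time."""
    error = 0.0 if claim_was_heard else 1.0
    for factor in weights:
        weights[factor] += LEARNING_RATE * error * case.get(factor, 0)
```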
So, let’s try to take a cut at what a code of ethics might look like.
- Every algorithm that makes decisions likely to have a significant impact upon the lives of human beings should be accompanied by a plain language explanation of the purpose of the algorithm, the factors and data considered, and how they are processed by the algorithm.
- The plain language explanation should be made available to, and considered by, the policy decision-maker for the project.
- If the use of the algorithm raises or may raise legal issues, it should be referred to counsel for review.
- In governmental and nonprofit organizations, the plain language document should be made available to the general public, unless a showing is made that publication would lead to harm such as gaming of the system.
- In the case of self-adjusting algorithms, the plain language explanation of the algorithm should explain how the adjustments will occur, as well as any safeguards that have been included to prevent excessive or overly focused adjustments. The system should include periodic reporting and human review of these adjustments (a minimal sketch of such safeguards follows this list).
- Algorithm designers should either use only scientifically validated factors, or should build into the operation of the algorithm feedback systems that minimize the risk that significant decisions are based on speculative factors.
- Algorithm designers should take particular care that their algorithm's assumptions do not operationalize either intended or unintended human bias, or act to reproduce or solidify patterns that are themselves the result of such bias or unfairness. Such efforts should be described in the plain language description.
- Should supervisors knowingly require algorithm developers to deploy algorithms that fail the bias test described in the preceding item, and those algorithms negatively impact individuals protected under anti-discrimination law, such acts should be considered violations of those laws.
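As a concrete, though hypothetical, illustration of the safeguards contemplated for self-adjusting algorithms above: the sketch below caps how far any weight may drift in a review period and writes every adjustment to a log for periodic human review. The cap, file name, and log format are assumptions for illustration, not recommended values.

```python
import datetime
import json

MAX_DRIFT_PER_PERIOD = 0.10   # hypothetical cap on how far a weight may move
ADJUSTMENT_LOG = "weight_adjustments.jsonl"   # reviewed periodically by people

def apply_bounded_adjustment(weights: dict, factor: str,
                             proposed_delta: float, reason: str) -> None:
    """Apply a self-adjustment, clamped to the per-period cap, and log it."""
    applied_delta = max(-MAX_DRIFT_PER_PERIOD,
                        min(MAX_DRIFT_PER_PERIOD, proposed_delta))
    weights[factor] = weights.get(factor, 0.0) + applied_delta
    entry = {
        "timestamp": datetime.datetime.now().isoformat(),
        "factor": factor,
        "proposed_delta": proposed_delta,
        "applied_delta": applied_delta,
        "new_weight": weights[factor],
        "reason": reason,
    }
    with open(ADJUSTMENT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
```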