The Guardian has a story that should cause terror to those who design legal software without properly testing it, as well as those who say we have to limit practice to fully trained lawyers.
As the Guardian reports, the online version of Form E has been in use for divorces in the UK since April 2014.
One particular paragraph, numbered 2.20, which is supposed to produce totals, failed to carry through the negative figure for liabilities entered earlier in the form, producing a simple arithmetic error. If a party had significant debts or liabilities, they were not recognised or recorded in the electronic form's totals, inflating that party's apparent worth. The form was therefore producing distorted net figures for applicants' assets.
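For illustration only, here is a minimal sketch of the kind of sign-handling bug described above. The function names and figures are invented; the actual Form E code is not public, so this shows only the general class of error, where liabilities fail to reduce the total:

```python
def net_assets_buggy(assets, liabilities):
    # Bug: liabilities are ignored instead of subtracted,
    # so the "total" is really just the sum of assets.
    return sum(assets)

def net_assets_correct(assets, liabilities):
    # Correct: liabilities reduce net worth.
    return sum(assets) - sum(liabilities)

assets = [250_000, 40_000]       # e.g. house equity, savings
liabilities = [180_000, 15_000]  # e.g. mortgage, credit-card debt

print(net_assets_buggy(assets, liabilities))    # 290000: inflated figure
print(net_assets_correct(assets, liabilities))  # 95000: true net position
```

A party carrying £195,000 of debt would, on the buggy calculation, appear nearly £200,000 wealthier than they really are, which is exactly the distortion the Guardian describes.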
God only knows how many lawyers, supervised paralegals, judges and court staff have used the form in the last 20 months without any of them noticing what should really be an obvious failure, like failing to deduct prior payments from a bill.
And, guess what: the error was not caught by a barrister, a solicitor, or anyone else listed above. Rather, it was caught by a McKenzie Friend, a lay expert who, under the law in most of the Commonwealth, is allowed to help litigants, even in court. For more on the concept (which has been influential in the US in reducing anxiety about nonlawyer practice), see here.
The first obvious point is that legal software should be tested and tested and tested. I remember back when we were doing some of the first online legal forms, it was really hard to get programmers to understand just how bad errors could be. Maybe we need to put in place appropriate standards for automated forms that include rigorous testing and certification.
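To make the testing point concrete, here is a sketch of the sort of regression checks a rigorous test suite for a totaling form would include. The function and figures are hypothetical, not Form E's actual logic:

```python
def net_assets(assets, liabilities):
    """Net worth: total assets minus total liabilities."""
    return sum(assets) - sum(liabilities)

# Checks that would have caught this class of error:
assert net_assets([100_000], [30_000]) == 70_000  # debts reduce the total
assert net_assets([], [5_000]) == -5_000          # net worth can go negative
assert net_assets([50_000], []) == 50_000         # no debts: total unchanged
print("all checks passed")
```

The second assertion is the important one: a form that silently drops or floors negative figures passes casual inspection with typical inputs and only fails when a party's liabilities actually matter, which is precisely when accuracy matters most.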
The second is that we have to get over the idea that lawyers are best for everything. As CJ Jonathan Lippman put it: “sometimes an expert non-lawyer is better than a lawyer non-expert.” Although here it seems that an expert nonlawyer was better than an entire profession of expert lawyers.
I hope that this episode will not slow the adoption of these online tools, but merely help make sure that they are done right. The irony is that in the long term such forms and processes should reduce math errors rather than increase them, but only if the programmers and testers know what they are doing.