Progress on Improving Judicial Evaluation Surveys

I recently posted about some disturbing research strongly suggesting that the results of judicial evaluation surveys reflect cultural biases against women and minorities.

I am glad to be able to report that there is some important work going on to address this problem. Jenny Elek and David Rottman at the National Center for State Courts and Brian L. Cutler at the University of Ontario Institute of Technology have reviewed these surveys against recent research on surveys in work performance evaluation generally, and found that the judicial surveys are often inconsistent with what that research says about making evaluations accurate.  For example, they found many instances of double-barreled questions, such as “Uses common sense and is resourceful in resolving problems that arise during proceedings,” as well as questions that did not refer to direct observations.  Moreover, there were significant problems with sampling and other survey methodologies.  In other words, these are fixable problems.  (The general point is that when a survey is badly designed, respondents will not be properly focused on the behavior of judges, and will therefore be more likely to rely on their unconscious biases rather than their knowledge.)

The three will shortly be publishing a paper on their findings, and are also working with a state on constructing and administering surveys that would not suffer from these defects.

This sounds very important, and I hope that states will soon be paying serious attention to this issue.

One Response to Progress on Improving Judicial Evaluation Surveys

  1. Rebecca Gill says:

    This was the gist of my paper about the Nevada bar poll. The problem is in the implementation. The surveys tend to be anonymous, they tend to be implemented well after the relevant behavior was observed, and they tend to be completed in a hurry. They usually do not ask about specific behaviors, but instead about general concepts (e.g., Is the judge courteous?). All of these things increase the risks associated with performance evaluations, especially in sex-typed and race-typed occupations.

    Some states may do this better than Nevada’s bar poll does, but many of the questions aren’t that different from what we see in the Nevada poll. I’m working right now to gather data on some additional judicial performance evaluation programs, and I hope to have sunnier news to report.
