The Nightmare of Website Bias — Lack of Specific Intent, And Hard to Prevent

Back in the early civil rights days, the strongest argument against effective civil rights enforcement was the claim that employment discriminators, for example, were merely following the demands of the market when they hired the most “appealing” staff.  (For a recent general example, apparently not using that word, see here.)

The insidiousness of these risks, and how deeply ingrained they are in a market economy, is highlighted in the recent New York Times article by Claire Cain Miller, “When Algorithms Discriminate.”  My question is this: Can any really good profit-oriented algorithm avoid discriminating, unless it is explicitly designed not to?

As the article explains:

There is a widespread belief that software and algorithms that rely on data are objective. But software is not free of human influence. Algorithms are written and maintained by people, and machine learning algorithms adjust what they do based on people’s behavior. As a result, say researchers in computer science, ethics and law, algorithms can reinforce human prejudices.

Google’s online advertising system, for instance, showed an ad for high-income jobs to men much more often than it showed the ad to women, a new study by Carnegie Mellon University researchers found.

Research from Harvard University found that ads for arrest records were significantly more likely to show up on searches for distinctively black names or a historically black fraternity. The Federal Trade Commission said advertisers are able to target people who live in low-income neighborhoods with high-interest loans.

One explanation discussed in the article is that advertisers, for example, specifically ask for certain attributes, such as gender or ZIP code, and that the “logical” result follows.

I would suggest a much deeper problem: a well designed algorithm will inevitably discover and perpetuate bias.  For example, a good potential-employee search algorithm, it would seem, should look at past employment successes, find the correlates, and bump up in the results those who are more likely to succeed in the current social, political, and economic environment.  Similarly, a good pricing algorithm will look at things like how far people have to drive to a brick-and-mortar store in order to identify where the supply and demand curves intersect for a particular potential buyer, and to do so will look at lots of information, all raising the price for those with less economic clout.  It’s the global extension of low-income-area supermarkets charging more.
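The dynamic described above can be sketched in a few lines of Python.  This is a toy illustration, not any real company’s system: every name and number below is invented.  A hiring model is trained only on past outcomes and a ZIP code, never on the protected group attribute, yet because the ZIP code is a proxy for group membership, the model faithfully reproduces the historical bias.

```python
import random

random.seed(0)

# Hypothetical synthetic data (all names and numbers invented for illustration):
# two neighborhoods, each dominated by one group, and historically biased
# hiring outcomes that reflect past discrimination rather than merit.
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # ZIP code is a strong proxy for group membership (90/10 split).
    if group == "A":
        zip_code = "10001" if random.random() < 0.9 else "10002"
    else:
        zip_code = "10002" if random.random() < 0.9 else "10001"
    # Biased history: group A was hired 70% of the time, group B only 30%.
    hired = random.random() < (0.7 if group == "A" else 0.3)
    applicants.append({"group": group, "zip": zip_code, "hired": hired})

# A "fair-looking" model: score each applicant by the historical hire rate
# in their ZIP code.  The protected attribute is never consulted.
totals, hires = {}, {}
for a in applicants:
    totals[a["zip"]] = totals.get(a["zip"], 0) + 1
    hires[a["zip"]] = hires.get(a["zip"], 0) + int(a["hired"])
rates = {z: hires[z] / totals[z] for z in totals}

def score(applicant):
    """Predicted success: just the historical hire rate of the ZIP code."""
    return rates[applicant["zip"]]

def average_score(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(score(a) for a in members) / len(members)

# The model never saw "group", yet its scores reproduce the historical gap.
print(f"average score, group A: {average_score('A'):.2f}")
print(f"average score, group B: {average_score('B'):.2f}")
```

The point of the sketch is that deleting the protected attribute changes nothing: the optimization pressure that makes the model “good” at predicting past outcomes is exactly the pressure that makes it find whatever proxy encodes the old bias.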

The Times piece concludes:

It would be impossible for humans to oversee every decision an algorithm makes. But companies can regularly run simulations to test the results of their algorithms. Mr. Datta suggested that algorithms “be designed from scratch to be aware of values and not discriminate.”

“The question of determining which kinds of biases we don’t want to tolerate is a policy one,” said Deirdre Mulligan, who studies these issues at the University of California, Berkeley School of Information. “It requires a lot of care and thinking about the ways we compose these technical systems.”

Silicon Valley, however, is known for pushing out new products without necessarily considering the societal or ethical implications. “There’s a huge rush to innovate,” Ms. Mulligan said, “a desire to release early and often — and then do cleanup.”

I have to say that I do not think it is enough for algorithms to “be designed from scratch to be aware of values and not discriminate,” because the problem is about more than avoiding the explicit use of prohibited factors, or even of identified correlates: any good software will simply be clever at finding other correlates.

Are there any alternatives to simply prohibiting price differentiation based on personal attributes (just as Obamacare does, albeit with certain exceptions)?

P.S. We explored this general topic at the first LSC Tech Summit almost two decades ago.  It is a pity that we have not continued this process.  See paper here.



About richardzorza

I am deeply involved in access to justice and the patient voice movement.
This entry was posted in Discrimination, Poverty, Technology.

One Response to The Nightmare of Website Bias — Lack of Specific Intent, And Hard to Prevent

  1. richardzorza says:

    This comment is from Claudia Johnson:

    This is an example of how an algorithm can negatively impact a whole community.

    In this case, Google Translate was, through its algorithm, inserting “illegal alien” when that was not the term used in the original language.  The result of the translations was inaccurate and offensive to a large community.  The issue was raised by multiple groups, and it turned out that the way the seemingly neutral algorithm was constructed led to this inaccurate and offensive word choice.

    So, yes, a seemingly neutral algorithm can end up yielding results that are not only inaccurate but also offensive, and maybe even political in nature.  Getting the group behind the algorithm to review it, and the inferences being drawn from its results, takes a lot of time and work, and requires that the group really care about accuracy and feedback.  That won’t always be the case.

    Moreover, an algorithm, because it is a mathematical formula written in a very specific language, is not easily understood by non-mathematicians, so it carries a “hue” of mystery that imbues it with a sense of legitimacy, because it is wrapped in math and the emerging data sciences.

    For me, the worst part about using algorithms to make resource and opportunity allocation decisions is that algorithms will miss the exceptional case, and the exceptional individual.  There will be times when a person, or a case, beats all the odds.  Some call this a black swan event: something whose probability of happening is so small, yet whose impact is so high.  Since the probability is so small, the algorithm will ignore it and recommend that resources or assistance not be granted to that case or person, because it is highly unlikely that the person or case will succeed.  Yet once in a while the amazing and unexpected does happen.  Black swans do occur; it is just hard for us to wrap our minds around them.

    So you are correct that algorithms would end up locking in place and perpetuating cultural, social, and class biases in our society in favor of those with power and influence, further reducing upward mobility and, worse yet, completely taking away the opportunity for those exceptional individuals, cases, and situations to be given the resources and the chance to live the American dream, to be all they can be, and to make our society better.

Comments are closed.