Back in the early civil rights days, the strongest argument against effective civil rights enforcement was the claim that employment discriminators, for example, were merely following the demands of the market when they hired the most “appealing” staff. (For a recent general example, apparently not using that word, see here.)
The insidiousness of these risks, and how deeply ingrained they are in a market economy, are highlighted in a recent New York Times article by Claire Cain Miller, “When Algorithms Discriminate.” My question is this: can any really good profit-oriented algorithm not discriminate, unless it is explicitly designed not to?
As the article explains:
There is a widespread belief that software and algorithms that rely on data are objective. But software is not free of human influence. Algorithms are written and maintained by people, and machine learning algorithms adjust what they do based on people’s behavior. As a result, say researchers in computer science, ethics and law, algorithms can reinforce human prejudices.
Google’s online advertising system, for instance, showed an ad for high-income jobs to men much more often than it showed the ad to women, a new study by Carnegie Mellon University researchers found.
Research from Harvard University found that ads for arrest records were significantly more likely to show up on searches for distinctively black names or a historically black fraternity. The Federal Trade Commission said advertisers are able to target people who live in low-income neighborhoods with high-interest loans.
One explanation discussed in the article is that advertisers, for example, specifically ask for certain attributes, such as gender or ZIP code, and that the “logical” result follows.
I would suggest a much deeper problem: a well-designed algorithm will inevitably discover and perpetuate bias. For example, a good potential-employee search algorithm, it would seem, should look at past employment successes, find the correlates, and bump up in the results those who are more likely to succeed in the current social, political, and economic environment. Similarly, a good pricing algorithm will look at things like how far people have to drive to a brick-and-mortar store in order to identify where the supply and demand curves intersect for a particular potential buyer, and to do so it will look at lots of information, all of which raises prices for those with less economic clout. It is the global extension of low-income-area supermarkets charging more.
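The dynamic can be sketched with a toy simulation (all numbers, groups, and the ZIP-code proxy here are hypothetical, invented for illustration): a model that is never shown a protected attribute, and is trained only on a neutral-looking feature that happens to correlate with it, will still reproduce the bias embedded in the historical outcome data.

```python
import random

random.seed(0)

def make_person():
    """Simulate one person: a protected group, a correlated ZIP code, and a
    historically biased 'success' outcome. All rates are assumptions."""
    group = random.choice(["A", "B"])
    # Assumed residential correlation: 90% of group A lives in zip 1,
    # 90% of group B lives in zip 2.
    if group == "A":
        zip_code = 1 if random.random() < 0.9 else 2
    else:
        zip_code = 2 if random.random() < 0.9 else 1
    # Historical data is biased: group A "succeeded" more often (0.7 vs 0.4).
    success = random.random() < (0.7 if group == "A" else 0.4)
    return group, zip_code, success

train = [make_person() for _ in range(10_000)]

# "Training": estimate the success rate per ZIP code, using ONLY the zip
# feature. The protected group label is never given to the model.
rates = {}
for z in (1, 2):
    outcomes = [s for g, zc, s in train if zc == z]
    rates[z] = sum(outcomes) / len(outcomes)

# Score fresh candidates by ZIP-code success rate alone, then compare the
# average score each protected group receives.
test = [make_person() for _ in range(10_000)]
avg_score = {}
for grp in ("A", "B"):
    scores = [rates[zc] for g, zc, s in test if g == grp]
    avg_score[grp] = sum(scores) / len(scores)

# Group A systematically outscores group B even though the model never saw
# the group label: the ZIP code acted as a proxy.
print(rates, avg_score)
```

The model has done exactly what it was asked to do, namely predict success from available data, yet its rankings differ sharply by group, because the proxy feature carries the historical bias.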
The Times piece concludes:
It would be impossible for humans to oversee every decision an algorithm makes. But companies can regularly run simulations to test the results of their algorithms. Mr. Datta suggested that algorithms “be designed from scratch to be aware of values and not discriminate.”
“The question of determining which kinds of biases we don’t want to tolerate is a policy one,” said Deirdre Mulligan, who studies these issues at the University of California, Berkeley School of Information. “It requires a lot of care and thinking about the ways we compose these technical systems.”
Silicon Valley, however, is known for pushing out new products without necessarily considering the societal or ethical implications. “There’s a huge rush to innovate,” Ms. Mulligan said, “a desire to release early and often — and then do cleanup.”
I have to say that I do not think it is enough to “design from scratch and not discriminate.” The problem is about more than avoiding the explicit use of prohibited factors, or even of identified correlates, because any good software will simply be clever at finding other correlates.
Are there any alternatives to simply prohibiting price differentiation based on personal attributes (just as Obamacare does, albeit with certain exceptions)?
P.S. We explored this general topic at the first LSC Tech Summit almost two decades ago. It is a pity that we have not continued this process. See paper here.