Algorithmic fairness

Alex Reinhart – Updated May 7, 2022

Algorithmic fairness conventionally refers to the use of algorithms to make decisions about people: the bias that may arise when data is used to decide who gets loans, who gets jobs, who gets bail, and so on. Fairness in these applications is seen as an ethical requirement, and the challenge is adequately defining “fairness” and finding methods to build models satisfying those definitions.

But I think many of the same fairness concerns can apply to much more mundane algorithms. For instance, algorithmic feed filtering on sites like reddit and Twitter results in some posts being more widely viewed than others; by optimizing for user engagement, these algorithms reward content that stimulates certain kinds of engagement; and because different political opinions tend to stimulate different kinds of engagement, this naturally promotes certain political views. Or, in other words, the medium influences the message, even if the designers of the medium (software engineers and data scientists) have no political motive whatsoever, or indeed any awareness that their work has such effects.
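The feedback loop described above can be sketched with a toy simulation. This is purely illustrative, not a model of any real platform: the topic labels and engagement rates are invented, and the "feed" is just a ranking by accumulated engagement. It shows how an engagement-optimizing ranker, with no explicit preference for either topic, ends up giving far more exposure to content that happens to provoke more engagement.

```python
import random

random.seed(0)

# Hypothetical parameters: two kinds of posts whose content tends to
# provoke different amounts of engagement (rates are made up).
ENGAGE_RATE = {"provocative": 0.30, "measured": 0.10}

posts = [{"topic": t, "engagements": 0, "views": 0}
         for t in ENGAGE_RATE for _ in range(50)]

# Each round, the feed surfaces the 20 posts with the most engagement
# so far (the "optimize for engagement" step); viewers then engage
# with probability depending on the post's topic.
for _ in range(200):
    feed = sorted(posts, key=lambda p: p["engagements"], reverse=True)[:20]
    for post in feed:
        post["views"] += 1
        if random.random() < ENGAGE_RATE[post["topic"]]:
            post["engagements"] += 1

views = {t: sum(p["views"] for p in posts if p["topic"] == t)
         for t in ENGAGE_RATE}
print(views)  # "provocative" posts accumulate the large majority of views
```

The rich-get-richer dynamic matters as much as the rate gap: once a post leads in engagement, it keeps getting shown, so even small differences in engagement propensity compound into large differences in visibility.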

That’s not to say that Twitter poses the same ethical problems as an algorithm making bail decisions, just that some of the same tools for defining and studying fairness may apply to it.

See also Privacy, Algorithmic due process, Machine learning and law, Predicting recidivism.

Fairness in content promotion