The Bias in What We Build

I’ve been thinking a lot lately about our biases and their influence on what we build and how. We’re all biased in some way; it’s an inevitable side effect of living. We experience certain things, we live in a certain environment, we have certain interactions, and over time all of these experiences and factors add up to shape the way we view ourselves and the way we view others.

These biases come into play over and over again in our work, and can have devastating consequences.

There was an interesting post on The Coral Project about anonymity and its impact, or rather non-impact, on online behavior. A frequent refrain when we try to understand why online behavior is so often so poor is that the ability to be anonymous is a primary cause of the problem. J. Nathan Matias argues otherwise:

Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment.

We need to change our entire approach to the question. Our concerns about anonymity are overly-simplistic; system design can’t solve social problems without actual social change.

While the article cites some research questioning our assumptions about anonymity online, the bulk of it focuses on reframing the discussion. We often consider the question of bad behavior online from the perspective of the people misbehaving. What is it that makes them feel free to be so much more vindictive in an online setting? Matias instead builds his case by focusing on the victims of this behavior.

Revealing personal information exposes people to greater levels of harassment and discrimination. While there is no conclusive evidence that displaying names and identities will reliably reduce social problems, many studies have documented the problems it creates. When people’s names and photos are shown on a platform, people who provide a service to them – drivers, hosts, buyers – reject transactions from people of color and charge them more. Revealing marital status on DonorsChoose caused donors to give less to students with women teachers, in fields where women were a minority. Gender- and race-based harassment are only possible if people know a person’s gender and/or race, and real names often give strong indications around both of these categories. Requiring people to disclose that information forces those risks upon them.

[…]

[O]ne thing we can state about removing anonymity is that it increases the risk for people on the receiving end of online harassment.

The push to remove anonymity online, then, is yet another example of how our biases shape the decisions we make and the things we build.

It is our biases that lead us to overlook accessibility, or how an application performs on a low-powered device or over a spotty network.

It is our biases that lead us to develop algorithms that struggle to recognize women’s voices or show more high-paying executive jobs to men than to women.

And it is our biases that lead us to frame the problem of online behavior from the perspective of the attacker, resulting in solutions that are dangerous for the people on the receiving end of the harassment.

In each of these situations, our biases don’t just lead us to build something that is hard to use; they cause us to actively, if unintentionally, exclude entire groups of people.