Ethics

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it?

[…]

The learning algorithms have no ethics or boundaries. There’s no slot in the algorithm that says “insert moral compass here”, or any way to tell them that certain inferences are forbidden because they would be wrong. In applying them to human beings, we leave ourselves open to unpleasant surprises.

The issue is not just intentional abuse (by trainers feeding skewed data into algorithms to affect the outcome), or unexamined bias that creeps in with our training data, but the fundamental non-humanity of these algorithms.

[…]

So what happens when these tools for maximizing clicks and engagement creep into the political sphere?

This is a delicate question! If you concede that they work just as well for politics as for commerce, you’re inviting government oversight. If you claim they don’t work well at all, you’re telling advertisers they’re wasting their money.

Facebook and Google have tied themselves into pretzels over this. The idea that these mechanisms of persuasion could be politically useful, and especially that they might be more useful to one side than the other, violates cherished beliefs about the “apolitical” tech industry.

[…]

One problem is that any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content. When I tried this experiment last night, within five clicks I went from a news item about demonstrators clashing in Berkeley to a conspiracy site claiming Trump was planning WWIII with North Korea, and another exposing FEMA’s plans for genocide.

This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.

[…]

But even though we’re likely to fail, all we can do is try. Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them.

We have to do something.

Information Literacy Is a Design Problem

Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base—every single one of us contributes, a little bit, to the way information is perceived.

Are we changing it for the better?

While it’s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. It’s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact.

The Reduced Motion Media Query

Sites all too often inundate their audiences with automatically playing, battery-draining, resource-hogging animations. The need for people to be able to take back control of those animations may be more widespread than you might initially think.

[…]

Remember: we’re all just temporarily-abled. Feeling a little dizzy might not seem like that big a deal, but that moment of nausea might be a critical one: losing balance and falling down, a migraine during an interview, nausea-triggered vomiting while working a food service job, passing out while operating a car UI, etc.

So what can we do about it?

Enter a new Media Query

Safari 10.1 introduces the Reduced Motion Media Query. It is a non-vendor-prefixed declaration that allows developers to “create styles that avoid large areas of motion for users that specify a preference for reduced motion in System Preferences.”

The syntax is pretty straightforward:

Code language: CSS

@media (prefers-reduced-motion: reduce) {
  .background {
    animation: none;
  }
}

Safari will parse this code and apply it to your site, letting you provide an alternative experience for users who have the Reduced Motion option enabled. Think of this new Media Query like @supports: describe the initial appearance, then modify the styles based on capability.
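The CSS above covers declarative animation, but plenty of motion on the web is driven from script. The same preference is exposed to JavaScript through window.matchMedia, so JS-driven effects can check it as well. Here is a minimal sketch, assuming an element matching the .background selector from the example above and a hypothetical is-animating class that switches the animated treatment on:

Code language: TypeScript

// A minimal sketch: checking the reduced-motion preference from script,
// for motion that is driven by JavaScript rather than by CSS.
// The "is-animating" class is a hypothetical hook, not a standard API.
const reducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function applyMotionPreference(): void {
  const el = document.querySelector('.background');
  if (!el) {
    return;
  }
  if (reducedMotion.matches) {
    // The user has asked for reduced motion: leave the element static.
    el.classList.remove('is-animating');
  } else {
    el.classList.add('is-animating');
  }
}

applyMotionPreference();

// In newer browsers you can also listen for the "change" event on the
// MediaQueryList to react if the user flips the setting mid-session.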

People's Names That Break Websites

The intersection of rushed (or careless) development and unintended consequences:

We’re doing a story about people that have names that websites and computers don’t seem to like - for example, we spoke to a guy named William Test, and a woman named Katie Test, both of whom can’t seem to keep a hotel or airplane booking because the name “test” is flagged by internal systems.

We also spoke to a guy named Christopher Null who had the same problem, and a woman named Joan Fread, who can’t use PayPal because her last name is the same as a PHP command.

I’m curious if there’s anyone in the dev community that is thinking about this, and how to deal with it. Is it even considered a problem? Is the population that this affects so small that people don’t even think about it?
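For what it’s worth, here is a hypothetical sketch of the kind of careless check that produces these stories: a system tries to filter out its own internal test records by pattern-matching against customers’ names, and real people named Test or Null get caught in the net. None of this comes from any actual PayPal or airline codebase; it only illustrates the failure mode.

Code language: TypeScript

// Hypothetical example only: inferring "this is test data" from a
// person's name is exactly the kind of shortcut that locks out real
// customers named Test, Null, or Fread.
const SUSPICIOUS_SURNAMES = ['test', 'null', 'undefined'];

interface Booking {
  firstName: string;
  lastName: string;
}

function looksLikeTestData(booking: Booking): boolean {
  // Bug: Katie Test and Christopher Null are indistinguishable from
  // internal test records under this rule, so their bookings get dropped.
  return SUSPICIOUS_SURNAMES.includes(booking.lastName.trim().toLowerCase());
}

// A safer approach is to mark test records explicitly when they are
// created (e.g. an isTest flag or a separate test environment), and to
// treat names as opaque strings rather than something to pattern-match.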

The Bias in What We Build

I’ve been thinking a lot lately about our biases and their influence on what we build and how. We’re all biased in some way—it’s an inevitable side-effect of living. We experience certain things, we live in a certain environment, we have certain interactions and over time all of these experiences and factors add up to impact the way we view ourselves and the way we view others.

These biases come into play over and over again in our work, and can have devastating consequences.

There was an interesting post on The Coral Project about anonymity and its impact—or rather, non-impact—on online behavior. When we try to understand why online behavior is so often so poor, a frequent refrain is that the ability to be anonymous is one of the primary causes. J. Nathan Matias argues differently, though:

Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment.

We need to change our entire approach to the question. Our concerns about anonymity are overly-simplistic; system design can’t solve social problems without actual social change.

While the article cites a little bit of research questioning our assumptions about anonymity online, the bulk of the article is focused on reframing our perspective of the discussion. We often consider the question of bad behavior online from the perspective of the people misbehaving. What is it that makes them feel free to be so much more vindictive in an online setting? Matias instead builds his case by focusing on the victims of this behavior.

Revealing personal information exposes people to greater levels of harassment and discrimination. While there is no conclusive evidence that displaying names and identities will reliably reduce social problems, many studies have documented the problems it creates. When people’s names and photos are shown on a platform, people who provide a service to them – drivers, hosts, buyers – reject transactions from people of color and charge them more. Revealing marital status on DonorsChoose caused donors to give less to students with women teachers, in fields where women were a minority. Gender- and race-based harassment are only possible if people know a person’s gender and/or race, and real names often give strong indications around both of these categories. Requiring people to disclose that information forces those risks upon them.

[…]

[O]ne thing we can state about removing anonymity is that it increases the risk for people on the receiving end of online harassment.

Removing anonymity online, then, is yet another example of how we reflect our own biases in the decisions we make and the things we build.

It is our biases that lead us to overlook accessibility or how an application performs on a low-powered device or spotty network.

It is our biases that lead us to develop algorithms that struggle to recognize women’s voices or show more high-paying executive jobs to men than women.

And it is our biases that lead us to frame the problem of online behavior from that of the attacker, leading to solutions that are dangerous for the people on the receiving end of that harassment.

In each of these situations, our biases don’t just lead us to build something that is hard to use; they cause us to actively, if unintentionally, exclude entire groups of people.

Alex Feyerke: Step Off This Hurtling Machine | JSConf.au 2014

I thought this was an excellent talk on the hard questions we should be asking ourselves as developers. Why do most people use closed, proprietary systems and devices, if the open web is so wonderful? Even as developers, we still use them ourselves, and depend on them. How can we be more empathetic to what the average user needs and wants? How can we lock open the web, so the future isn’t entirely dependent on huge corporations and services, which is where we seem to be heading?