Ethics

Users Don't Hate Change. They Hate Our Design Choices.

For years, we studied teams rolling out new designs to see if we could mitigate negative reactions to new releases and design changes. We studied hundreds of product and service rollouts. We watched and learned from the reactions of thousands of users.

When we dug into those users’ reactions, patterns emerged. The users told us the changes inconvenienced them. They had no idea the change was coming, and suddenly it was in their face. Users were upset because they were surprised.

They also told us the old version worked fine. Even when it took a while to get comfortable, they learned it. Many users mastered difficult-to-use designs.

Everything was different when the new version arrived. What they’d mastered before didn’t help them now. The company said it was an improved design, but they couldn’t see the improvements. Why should these users learn something new that doesn’t help them? Users were upset because they couldn’t see the value.

We also saw many instances where users didn’t react negatively to changes. Often, they didn’t react at all. We saw new designs that didn’t affect users’ behaviors, and the users paid no attention to them.

In these cases, the changes were often not noticeable. Sometimes the changes were small and isolated. Yet, we also saw users seemingly not notice several updates with extensive changes. (In more than one instance, an entire application’s infrastructure had been rewritten without a single user noticing.)

In cases where the design changes were noticeable, the designers gave the users control to switch when they wanted. The designers showed why the change was valuable to the users. And the designers made the transition easy by taking into account the knowledge and experience their users already had with the product.

You Pay (Or Maybe You Don't)

Andy and I have put a lot of work and thought into Every Layout. We want it to be the best resource it can be. Naturally, we would like to be paid for the value we are giving you. However, we know that not everyone can afford the things they’d like to have. We’re trying to address this in two ways:

  1. A large selection of free content, including all of the “rudiment” articles that cover the basics of (our take on) contemporary CSS
  2. An honor system, wherein you can claim to be eligible for the full Every Layout for free

What makes you eligible for (2)? If you are currently out of work; if you are a full-time student or under 19 years old; if you are trying to get your first job as a web developer or designer; or if you are an unpaid volunteer for a charitable organization not involved in proselytism: consider yourself a match. Also, if you are the sole person of your ethnicity, gender, or sexual orientation at your company or in your local developer community, we will gift Every Layout to you. And if you have a disability that makes accessing equivalent resources difficult, you can have this resource for free. We are trying to make it as accessible as possible.

[…]

It would be logistically and ethically implausible for us to vet, or otherwise judge, whether you deserve to have Every Layout without a charge. If you believe you fit the criteria above, let us know. That’s the honor part.

Are we suckers? Perhaps, to some. But while you alone decide if you qualify, we decide if you disqualify. That is, if we see you saying or sharing racist, homophobic, transphobic, sexist, or fascist sentiments, or you’re caught engaging in what we consider, in any way, punching down, you get fuck all for free from us.

Revisiting prefers-reduced-motion, the reduced motion media query

Two years ago, I wrote about prefers-reduced-motion, a media query introduced into Safari 10.1 to help people with vestibular and seizure disorders use the web. The article provided some background about the media query, why it was needed, and how to work with it to avoid creating disability-triggering visual effects.

The article was informed by other people’s excellent work, namely Orde Saunders’ post about user queries, and Val Head’s article on web animation motion sensitivity.

We’re now four months into 2019, and it makes me happy to report that we have support for the feature in all major desktop browsers! Safari was first, with Firefox being a close second. Chrome was a little late to the party, but introduced it as of version 74.
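
For anyone who hasn’t reached for it yet, the basic pattern is an ordinary media query block. Here is a minimal sketch — the .spinner selector and spin keyframes are placeholders of mine, not taken from the article:

```css
/* A hypothetical spinner that rotates continuously. */
.spinner {
  animation: spin 1.5s linear infinite;
}

@keyframes spin {
  to { transform: rotate(360deg); }
}

/* If the user has asked their OS for reduced motion,
   switch the animation off. */
@media (prefers-reduced-motion: reduce) {
  .spinner {
    animation: none;
  }
}
```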

[…]

Reduce isn’t necessarily remove

We may not need to throw the baby out with the bathwater when it comes to using animation. Remember, it’s prefers-reduced-motion, not prefers-no-motion.

[…]

If the meaning of a component is diminished by removing its animation altogether, we could slow down and simplify the component’s animation to the point where the concept can be communicated without potentially being an accessibility trigger.
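
As a rough sketch of that approach — the .toast component and its keyframes below are hypothetical, not taken from the article — a large slide can degrade to a slower, subtler fade that still communicates “something appeared”:

```css
/* A hypothetical notification toast that slides up into view. */
.toast {
  animation: slide-in 300ms ease-out both;
}

@keyframes slide-in {
  from { transform: translateY(100%); opacity: 0; }
}

/* Reduce, don't remove: keep the "something appeared" cue,
   but trade the big movement for a slower, gentler fade. */
@media (prefers-reduced-motion: reduce) {
  .toast {
    animation: fade-in 600ms ease-out both;
  }
}

@keyframes fade-in {
  from { opacity: 0; }
}
```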

When algorithms surprise us

There’s an age-old problem with learning algorithms that some advocates don’t seem to fully grasp: without human ethics built in, the potential for harm can be enormous. Isaac Asimov was onto something with the three laws of robotics.

Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.

[…]

Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.

Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.

Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.

How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm began winning all its games. It turned out that its strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the newly expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.

The unsettling intersection of children's videos and manipulating search rankings for profit

It turns out that there’s an industry out there making money by algorithmically mashing up kids’ videos in ways that manipulate search rankings, without much apparent care for how their content could potentially traumatize some children. This is well worth a read, if only as another piece of evidence that algorithms often do not have human ethics baked in, and of the unintended effects this can have.

A/B testing can't tell you why a change is better

I think this is a good example of the is-ought problem in philosophy, transplanted into the world of software development:

A/B testing is a great way of finding out what happens when you introduce a change. But it can’t tell you why.

The problem is that, in a data-driven environment, decisions ultimately come down to whether something works or not. But just because something works doesn’t mean it’s a good thing.

If I were trying to convince you to buy a product, or use a service, one way I could accomplish that would be to literally put a gun to your head. It would work. Except it’s not exactly a good solution, is it? But if we were to judge by the numbers (100% of people threatened with a gun did what we wanted), it would appear to be the right solution.

Disqus is a performance and privacy nightmare

Relevant points [of disabling Disqus] are:

  • Load-time goes from 6 seconds to 2 seconds.
  • There are 105 network requests vs. 16.
  • There are a lot of irrelevant requests going to networks that will be tracking your movements.

Among the networks you can find:

  • disqus.com - Obviously!
  • google-analytics.com - Multiple requests; no idea who’s capturing your movements.
  • connect.facebook.net - If you’re logged into Facebook, they know you visit this site.
  • accounts.google.com - Google will also map your visits to this site with any of your Google accounts.
  • pippio.com - LiveRamp identity mapping for harvesting your details for commercial gain.
  • bluekai.com - Identity tracking for marketing campaigns.
  • crwdcntrl.net - Pretty suspect site listed as referenced by viruses and spyware.
  • exelator.com - Another identity- and movement-tracking site, which even has a virus named after it!
  • doubleclick.net - We all know this one: ad services and movement tracking, owned by Google.
  • tag.apxlv.net - Very shady and tricky to pinpoint an owner, as they obfuscate their domain (I didn’t even know this was a thing!). Adds a tracking pixel to your site.
  • adnxs.com - More tracking garbage, albeit slightly more prolific.
  • adsymptotic.com - Advertising and tracking that supposedly uses machine learning.
  • rlcdn.com - Obfuscated advertising/tracking from Rapleaf.
  • adbrn.com - “Deliver a personalized customer journey across devices, channels and platforms with Adbrain customer ID mapping technology.”
  • nexac.com - Oracle’s Datalogix, their own tracking and behavioural pattern rubbish.
  • tapad.com - OK, I can’t be bothered to look this up anymore.
  • liadm.com - More? Oh, ok, then…
  • sohern.com - Yup. Tracking.
  • demdex.net - Tracking. From Adobe.
  • bidswitch.net - I’ll give you one guess…
  • agkn.com - …
  • mathtag.com - Curious name, maybe it’s… no. It’s tracking you.

I can’t visit many of these sites because I have them blocked in uBlock Origin, so this information was gleaned from Google crawl results of the web pages and third parties. Needless to say, it’s a pretty disgusting insight into how certain free products turn you into the product. What’s more worrying are the services that go to lengths to hide who they are and what their purposes are for tracking your movements.

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it?

[…]

The learning algorithms have no ethics or boundaries. There’s no slot in the algorithm that says “insert moral compass here”, or any way to tell them that certain inferences are forbidden because they would be wrong. In applying them to human beings, we leave ourselves open to unpleasant surprises.

The issue is not just intentional abuse (by trainers feeding skewed data into algorithms to affect the outcome), or unexamined bias that creeps in with our training data, but the fundamental non-humanity of these algorithms.

[…]

So what happens when these tools for maximizing clicks and engagement creep into the political sphere?

This is a delicate question! If you concede that they work just as well for politics as for commerce, you’re inviting government oversight. If you claim they don’t work well at all, you’re telling advertisers they’re wasting their money.

Facebook and Google have tied themselves into pretzels over this. The idea that these mechanisms of persuasion could be politically useful, and especially that they might be more useful to one side than the other, violates cherished beliefs about the “apolitical” tech industry.

[…]

One problem is that any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content. When I tried this experiment last night, within five clicks I went from a news item about demonstrators clashing in Berkeley to a conspiracy site claiming Trump was planning WWIII with North Korea, and another exposing FEMA’s plans for genocide.

This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.

[…]

But even though we’re likely to fail, all we can do is try. Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them.

We have to do something.

Information Literacy Is a Design Problem

Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base—every single one of us contributes, a little bit, to the way information is perceived.

Are we changing it for the better?

While it’s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. It’s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact.