Resiliency

NaN is contagious - Robust Client-Side JavaScript

NaN is a dangerous beast. NaN is a special value that means “not a number”, but in fact it is a number you can calculate with.

NaN is contagious. All calculations involving NaN fail silently, yielding NaN: 5 + NaN makes NaN, Math.sqrt(NaN) produces NaN. All comparisons with NaN yield false: 5 > NaN is false, 5 < NaN is also false. 5 === NaN is false, NaN === NaN is also false.

If a NaN slips into your logic, it is carried through the rest of the program until the user sees a “NaN” appearing in the interface. It is hard to find the cause of a NaN since the place where it appears can be far from the place that caused it. Typically, the cause of a NaN is an implicit type conversion. My advice is to raise the alarm as soon as you see a NaN.

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.
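
To make that advice concrete, here is a minimal sketch of raising the alarm at the point where a NaN could enter; the function and names are illustrative, not taken from the book.

Code language: JavaScript

function parsePrice(input) {
  var price = parseFloat(input);
  if (Number.isNaN(price)) {
    // Raise the alarm close to the cause instead of letting the NaN spread
    throw new TypeError('parsePrice: "' + input + '" is not a number');
  }
  return price;
}

Note that an equality check cannot detect NaN, because NaN === NaN is false; use Number.isNaN(value) (or the value !== value trick) instead.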

Failing fast - Robust Client-Side JavaScript

Every computer program may have logic bugs: A case is not considered, the state is changed incorrectly, data is transformed wrongly, input is not handled. These bugs can have several consequences in JavaScript:

In the best case the script fails with an exception. You may wonder, why is that the best case? Because an exception is visible and easy to report. The line of code that threw an exception is likely not the root cause, but the cause is somewhere in the call stack. An exception is a good starting point for debugging.

In the worst case the application continues to run despite the error, but some parts of the interface are broken. Sometimes the user gets stuck. Sometimes data gets lost or corrupted permanently.

JavaScript code should fail fast (PDF) to make errors visible. Failing early with an exception, even with a user-facing error, is better than failing silently with undefined, puzzling behavior.

Unfortunately, JavaScript does not follow the principle of failing fast. JavaScript is a weakly typed language that goes to great lengths not to fail with an error.

[…]

The key to failing fast is to make your assumptions explicit with assertions.

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.
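
To illustrate that last point, a tiny assertion helper is enough to turn a silently wrong state into a visible exception; this sketch is mine, not the book's.

Code language: JavaScript

function assert(condition, message) {
  if (!condition) {
    throw new Error('Assertion failed: ' + message);
  }
}

function setAge(user, age) {
  // Make the assumption explicit: fail fast on bad input instead of storing garbage
  assert(typeof age === 'number' && age >= 0, 'age must be a non-negative number');
  user.age = age;
}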

The browser as a runtime environment - Robust Client-Side JavaScript

There are several relevant browsers in numerous versions running on different operating systems on devices with different hardware abilities, internet connectivity, etc. The fact that the web client is not under their control maddens developers from other domains. They see the web as the most hostile software runtime environment. They understand the diversity of web clients as a weakness.

Proponents of the web counter that this heterogeneity and inconsistency is in fact a strength of the web. The web is open, it is everywhere, it has a low access threshold. The web is adaptive and keeps on absorbing new technologies and fields of applications. No other software environment so far has demonstrated this degree of flexibility.

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.

How many people have JavaScript disabled or unavailable?

In 2013, the UK’s Government Digital Service did an experiment to determine how many of their users weren’t receiving JavaScript. You might assume, as I did years ago, that the only two options were either that users would have JavaScript enabled, or have disabled it explicitly via their browser’s settings or an addon. Unfortunately, those aren’t the only two options. The majority of users that don’t receive JavaScript likely don’t do so by choice.

They found that 1.1% of users didn’t execute their JavaScript test. Of those, only 0.2% either explicitly disabled JavaScript or used a browser that didn’t support it. The remaining 0.9% gave no indication that JavaScript was disabled or unsupported, yet still did not execute the test.

‘noscript’ tags will only be followed by browsers that explicitly have JavaScript disabled or don’t support JavaScript at all. So a significant number of people had a JavaScript-enabled browser but still didn’t run the scripts successfully.
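
For context, the GDS experiment worked by comparing hits on three tiny images. The sketch below uses hypothetical paths; the first two images live in the page markup and only the third is requested by script.

Code language: JavaScript

// In the markup (hypothetical paths):
//   <img src="/beacon/base.gif"> is requested by every browser,
//   <noscript><img src="/beacon/noscript.gif"></noscript> only when JavaScript is off or unsupported.
// This script requests a third image so the server logs can be compared:
var beacon = new Image();
beacon.src = '/beacon/js.gif?t=' + new Date().getTime();
// Browsers counted for the base image but for neither the js nor the noscript image
// are the ones that had JavaScript enabled yet still failed to run the script.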

It’s hard to know exactly why these browsers didn’t run the JavaScript, but a number of possible reasons are:

  • corporate or local blocking or stripping of JavaScript elements
  • existing JavaScript errors in the browser (i.e. from browser add-ons, toolbars, etc.)
  • page being left between requesting the base image and the script/noscript image
  • browsers that pre-load pages they incorrectly predict you will visit
  • network errors, especially on mobile devices
  • and undoubtedly many more I haven’t even thought of…

The takeaway? Do not assume that everyone has JavaScript, or will have it execute reliably even if they haven’t disabled it.

Alex Feyerke: Step Off This Hurtling Machine | JSConf.au 2014

I thought this was an excellent talk on the hard questions we should be asking ourselves as developers. Why do most people use closed, proprietary systems and devices, if the open web is so wonderful? Even as developers, we still use them ourselves, and depend on them. How can we be more empathetic to what the average user needs and wants? How can we lock open the web, so the future isn’t entirely dependent on huge corporations and services, which is where we seem to be heading?

The problems with feature detection

The principle of feature detection is really simple. Before you use a particular API, you test whether it is actually available. If it is not, you can provide an alternative or fail gracefully. Why is this necessary? Well, unlike HTML and CSS, JavaScript can be very unforgiving. If you use an API without actually testing for its existence and simply assume it works, you risk that your script will throw an error and die when it tries to call the API.

Take the following example:

Code language: JavaScript

if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function(pos) {
    alert('You are at: ' + pos.coords.latitude + ', ' + pos.coords.longitude);
  });
}

Before we call the getCurrentPosition() function, we actually check if the Geolocation API is available. This is a pattern we see again and again with feature detection.

If you look carefully you will notice that we don’t actually test if the getCurrentPosition() function is available. We assume it is, because navigator.geolocation exists. But is there actually a guarantee? No.
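
One way to tighten the test, sketched here rather than taken from the article, is to check the method itself in addition to the containing object, and to pass an error callback so a detected but failing API still degrades gracefully.

Code language: JavaScript

if (navigator.geolocation &&
    typeof navigator.geolocation.getCurrentPosition === 'function') {
  navigator.geolocation.getCurrentPosition(
    function (pos) {
      alert('You are at: ' + pos.coords.latitude + ', ' + pos.coords.longitude);
    },
    function (error) {
      // The API exists but could not deliver a position; fall back gracefully
    }
  );
}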

[…]

Cutting the mustard

There is another principle that has gotten very popular lately. By using some very specific feature tests you can make a distinction between old legacy browsers and modern browsers.

Code language: JavaScript

if ('querySelector' in document
  && 'localStorage' in window
  && 'addEventListener' in window) 
{
  // bootstrap the javascript application
}

In itself it is a perfectly valid way to make sure the browser has a certain level of standards support. But at the same time it is also dangerous, because supporting querySelector, localStorage and addEventListener doesn’t say anything about supporting other standards.

Even if the browser passes the test, you really still need to do proper feature detection for each and every API you are depending on.
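
A concrete example of why the extra checks matter: 'localStorage' in window passes the mustard cut, yet writing to it can still throw (some private-browsing modes expose the API with a zero quota), so it is the operation you actually rely on that should be tested. A rough sketch:

Code language: JavaScript

function storageUsable() {
  try {
    var key = '__storage_test__';
    window.localStorage.setItem(key, key);
    window.localStorage.removeItem(key);
    return true;
  } catch (e) {
    // The property exists, but using it fails; treat the feature as unavailable
    return false;
  }
}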

[…]

There are features where the whole premise of feature detection just fails horribly. Some browsers ship features that are so broken that they do not work at all. Sometimes it is a bug, and sometimes it is just pure laziness or incompetence. That may sound harsh, but I’m sure you will agree with me by the end of this article.

The most benign variants are simply bugs. Everybody ships bugs. And the good browsers quickly fix them. Take for example Opera 18, which did have the API for Web Notifications but crashed when you tried to use it. Blink, the rendering engine, actually supported the API, but Opera did not have a proper back-end implementation. And unfortunately this feature got enabled by mistake. I reported it and it was fixed in Opera 19. These things happen.
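
When the feature test passes but the implementation itself is broken, the only remaining defence is to treat the first real call as something that can fail. The Web Notifications sketch below is illustrative, not the article's code.

Code language: JavaScript

var notificationsUsable = false;
if (window.Notification && typeof Notification.requestPermission === 'function') {
  try {
    Notification.requestPermission(function (permission) {
      notificationsUsable = (permission === 'granted');
    });
  } catch (e) {
    // The API is advertised but crashes when used; fall back silently
  }
}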

Introducing Resilient Web Design

I wrote a thing. The thing is a book. But the book is not published on paper. This book is on the web. It’s a web book. Or “wook” if you prefer …please don’t prefer. Here it is:

Resilient Web Design.

It’s yours for free.

Much of the subject matter will be familiar if you’ve seen my conference talks from the past couple of years, particularly Enhance! and Resilience. But the book ended up taking some twists and turns that surprised me. It turned out to be a bit of a history book: the history of design, the history of the web.

Resilient Web Design is a short book. It’s between sixteen and seventeen thousand words long. You could read the whole thing in a couple of hours. Or—because the book has seven chapters—you could take fifteen to twenty minutes out of a day to read one chapter, and you’d have read the whole thing in a week.

If you make websites in any capacity, I hope that this book will resonate with you. Even if you don’t make websites, I still hope there’s an interesting story in there for you.

You can read the whole book on the web, but if you’d rather have a single file to carry around, I’ve made some PDFs as well: one in portrait, one in landscape.

I’ve licensed the book quite liberally. It’s released under a Creative Commons attribution share-alike licence. That means you can re-use the material in any way you want (even commercial usage) as long as you provide some attribution and use the same licence. So if you’d like to release the book in some other format like ePub or anything, go for it.

I’m currently making an audio version of Resilient Web Design. I’ll be releasing it one chapter at a time as a podcast. Here’s the RSS feed if you want to subscribe to it. Or you can subscribe directly from iTunes.

I took my sweet time writing this book. I wrote the first chapter in March 2015. I wrote the last chapter in May 2016. Then I sat on it for a while, figuring out what to do with it. Eventually I decided to just put the whole thing up on the web—it seems fitting.

Whereas the writing took over a year of solid procrastination, making the website went surprisingly quickly. After one weekend of marking up and styling, I had most of it ready to go. Then I spent a while tweaking. The source files are on GitHub.

I’m pretty happy with the end result. I’ll write a bit more about some of the details over the next while—the typography, the offline functionality, print styles, and stuff like that. In the meantime, I hope you’ll peruse this little book at your leisure…

Resilient Web Design.

If you like it, please spread the word.

More Proof We Don't Control Our Web Pages

I’ve talked about this before: As web designers, we can’t trust the network. Sure, we have to contend with mobile data “dead zones” and dropped connections as our users move about throughout the day, but there’s a lot more to the network that’s beyond our control.

Here’s a roundup of some of my “favorite” network-issue-related headlines from the last few years:

Some of these issues can be avoided by serving content over HTTPS, but that still won’t enable you to bypass things like firewall blacklists (which led to the jQuery outage on Sky). Your best bet is to design defensively and make sure your users can still accomplish their goals on your site when some resources are missing or markup is altered.

We can’t control what happens to us in this world, we can only control our reaction to it.