Progressive enhancement

Serving old browsers limited JavaScript and CSS

The Guardian website viewed with Internet Explorer 8: a very basic document with little to no CSS applied. Unsophisticated, yet functional.
The nature.com homepage viewed with Internet Explorer 9: a very simple layout without much CSS applied.

Since a small percentage of users are still stuck on various versions of Internet Explorer and other older browsers, a good way to deal with them seems to be to serve most or all of our JavaScript and CSS only to browsers that cut the mustard. The older set gets a basic but functional experience, without the risk that our newer, shinier stuff will inevitably break, and without polyfills that may or may not work.
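
A minimal sketch of what that conditional serving might look like (the baseline tests and file names here are illustrative assumptions, not anyone’s production code): the core HTML and CSS work everywhere, and the enhanced assets are only appended for browsers that pass.

Code language: JavaScript

// Everything up to this point is lean, core HTML and CSS that works anywhere.
// Only browsers that cut the mustard get the enhanced assets.
if ('querySelector' in document && 'addEventListener' in window) {
  var head = document.getElementsByTagName('head')[0];

  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/enhanced.css'; // hypothetical file name
  head.appendChild(link);

  var script = document.createElement('script');
  script.src = '/js/app.js'; // hypothetical file name
  script.async = true;
  head.appendChild(script);
}

Old browsers that fail the test never even download the enhanced files, so there is nothing there to break.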

See also: Cutting the mustard with only CSS

1% of JavaScript requests to BuzzFeed (around 13 million per month) time out

More evidence that we don’t fully control our web pages and that a non-zero number of page views don’t execute JavaScript fully or correctly, despite it being enabled.

Says @ianfeather at #AllDayHey — “our monitoring tells us that around 1% of requests for JavaScript on BuzzFeed timeout. That’s around 13 million requests per month.” A reminder if one were needed that we should design for resilience
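
If you want numbers like that for your own site, here is a rough sketch of one way to collect them (the /beacon endpoint, the timeout value and the function name are all assumptions, not BuzzFeed’s actual setup):

Code language: JavaScript

// Load a script and report whether it arrived, errored or timed out.
function loadWithTelemetry(src, timeoutMs) {
  var script = document.createElement('script');
  var settled = false;

  function report(outcome) {
    if (settled) return; // only the first outcome counts
    settled = true;
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/beacon', src + ':' + outcome);
    }
  }

  setTimeout(function () { report('timeout'); }, timeoutMs);
  script.onload = function () { report('loaded'); };
  script.onerror = function () { report('error'); }; // blocked, dropped, etc.

  script.src = src;
  document.head.appendChild(script);
}

loadWithTelemetry('/js/app.js', 10000);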

Duck typing - Robust Client-Side JavaScript

As a weakly typed language, JavaScript performs implicit type conversion so developers do not need to think much about types. The concept behind this is called duck typing: “If it walks like a duck and quacks like a duck, it is a duck.”

typeof and instanceof check what a value is and where it comes from. As we have seen, both operators have serious limitations.

In contrast, duck typing checks what a value does and provides. After all, you are not interested in the type of a value, you are interested in what you can do with the value.

[…]

Duck typing would ask instead: What does the function do with the value? Then check whether the value fulfills the needs, and be done with it.

[…]

This check is not as strict as instanceof, and that is an advantage. A function that does not assert types but object capabilities is more flexible.

For example, JavaScript has several types that do not inherit from Array.prototype but walk and talk like arrays: Arguments, HTMLCollection and NodeList. A function that uses duck typing is able to support all array-like types.

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.
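
To make the distinction concrete, here is a minimal sketch (the function names are mine, not the book’s) of the same helper written both ways:

Code language: JavaScript

// instanceof rejects array-like values that would work perfectly well:
function firstOfStrict(list) {
  if (!(list instanceof Array)) throw new TypeError('expected an Array');
  return list[0];
}

// Duck typing only asks for what the function actually uses:
// indexed access and a numeric length.
function firstOf(list) {
  if (list && typeof list.length === 'number') {
    return list[0];
  }
}

firstOf([1, 2, 3]);                      // works: a real Array
firstOf(document.querySelectorAll('p')); // works: a NodeList, not an Array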

How many people have JavaScript disabled or unavailable?

In 2013, the UK’s Government Digital Service ran an experiment to determine how many of their users weren’t receiving JavaScript. You might assume, as I did years ago, that there are only two possibilities: either users have JavaScript enabled, or they have explicitly disabled it via their browser’s settings or an add-on. Unfortunately, those aren’t the only two options. The majority of users who don’t receive JavaScript likely aren’t missing it by choice.

They found that 1.1% of users didn’t execute their JavaScript test. Of those, only 0.2% either explicitly disabled JavaScript or used a browser that didn’t support it. The remaining 0.9% gave no indication that JavaScript was disabled or unsupported, yet still didn’t execute the test.

‘noscript’ tags will only be followed by browsers that explicitly have JavaScript disabled or don’t support JavaScript at all. So a significant number of people had a JavaScript-enabled browser but still didn’t run the scripts successfully.
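
A loose sketch of how that style of measurement works (this is not GDS’s actual code, and the image paths are made up): every browser requests a base image from the markup, JavaScript-less browsers request a second image inside a noscript element, and only browsers that actually execute scripts request a third.

Code language: JavaScript

// The markup itself carries the first two probes (not shown here):
// a plain base image that every browser requests, and an image inside
// a noscript element that only JavaScript-less browsers request.
// This script adds the third probe, requested only when JavaScript runs.
var img = document.createElement('img');
img.src = '/measure/js-executed.gif'; // hypothetical path
document.body.appendChild(img);

// Comparing server logs for the three images then tells the story:
//   noscript-image hits               = JavaScript off or unsupported
//   base hits minus js-image hits     = everyone whose JavaScript didn't run
//   the gap between those two numbers = enabled, but failed anyway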

It’s hard to know exactly why these browsers didn’t run the JavaScript, but a number of possible reasons are:

  • corporate or local blocking or stripping of JavaScript elements
  • existing JavaScript errors in the browser (ie from browser add-ons, toolbars etc)
  • page being left between requesting the base image and the script/noscript image
  • browsers that pre-load pages they incorrectly predict you will visit
  • network errors, especially on mobile devices
  • and undoubtedly many more I haven’t even thought of…

The takeaway? Do not assume that everyone has JavaScript, or will have it execute reliably even if they haven’t disabled it.

Old browsers are not your fault, but they are your responsibility

The fact that old browsers exist is not your fault. Don’t start these discussions by acting as if it is your failing that you can’t get a site looking identical in every browser released in the last 10 years while using technology only released this year. It’s not your fault, but it is your problem: it is your responsibility as a web professional to get yourself into a position where you can take the right course of action for each project.

Does Google execute JavaScript?

Yet another reason to not assume your JavaScript will always run, or if it does run, that it will run in its entirety:

I’m told: Yes, it’s 2016; of course Google executes JavaScript.

But I’m also told: Server-side rendering is necessary for SEO.

If Google can run JavaScript and thereby render client-side views, why is server-side rendering necessary for SEO? Okay, Google isn’t the only search engine, but it’s obviously an important one to optimize for.

Recently I ran a simple experiment to see to what extent the Google crawler understands dynamic content. I set up a web page at doesgoogleexecutejavascript.com which does the following:

  1. The HTML from the server contains text which says “Google does not execute JavaScript.”
  2. There is some inline JavaScript on the page that changes the text to “Google executes JavaScript, but only if it is embedded in the document.”
  3. The HTML also links to a script which, when loaded, changes the text to “Google executes JavaScript, even if the script is fetched from the network. However, Google does not make AJAX requests.”
  4. That script makes an AJAX request and updates the text with the response from the server. The server returns the message “Google executes JavaScript and even makes AJAX requests.”
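
To make those four steps concrete, here is a minimal sketch of such a page (the element id, file path and endpoint are assumptions, not the experiment’s actual source):

Code language: JavaScript

// Step 1 lives in the markup, which ships with something like:
// <p id="result">Google does not execute JavaScript.</p>

// Step 2: an inline script embedded directly in the document.
document.getElementById('result').textContent =
  'Google executes JavaScript, but only if it is embedded in the document.';

// Steps 3 and 4 live in an external file, e.g. /static/test.js:
document.getElementById('result').textContent =
  'Google executes JavaScript, even if the script is fetched from the ' +
  'network. However, Google does not make AJAX requests.';

var xhr = new XMLHttpRequest();
xhr.open('GET', '/ajax-message'); // server replies with the final message
xhr.onload = function () {
  document.getElementById('result').textContent = xhr.responseText;
};
xhr.send();

Whichever sentence survives in the rendered page reveals how far the crawler got.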

After I launched this page, I linked to it from its GitHub repository and waited for Google to discover it.

[…]

It seems Google is not guaranteed to run your JavaScript automatically. You may have to manually trigger a crawl. And, even then, Google apparently won’t do any AJAX requests your page may depend on, or at least it didn’t in my case.

[…]

My conclusion is: Google may or may not decide to run your JavaScript, and you don’t want your business to depend on its particular inclination of the day. Do server-side/universal/isomorphic rendering just to be safe.

The problems with feature detection

The principle of feature detection is really simple: before you use a particular API, you test whether it is actually available. If it is not, you can provide an alternative or fail gracefully. Why is this necessary? Well, unlike HTML and CSS, JavaScript can be very unforgiving. If you use an API without testing for its existence and just assume it works, you risk your script simply throwing an error and dying when it tries to call the missing API.

Take the following example:

Code language: JavaScript

if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function(pos) {
    alert('You are at: ' + pos.coords.latitude + ', ' + pos.coords.longitude);
  });
}

Before we call the getCurrentPosition() function, we actually check if the Geolocation API is available. This is a pattern we see again and again with feature detection.

If you look carefully you will notice that we don’t actually test if the getCurrentPosition() function is available. We assume it is, because navigator.geolocation exists. But is there actually a guarantee? No.
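
A slightly more defensive version of the example also checks that the method we actually call is a function:

Code language: JavaScript

// Check not just the API object, but the specific method we depend on.
if (navigator.geolocation &&
    typeof navigator.geolocation.getCurrentPosition === 'function') {
  navigator.geolocation.getCurrentPosition(function (pos) {
    alert('You are at: ' + pos.coords.latitude + ', ' + pos.coords.longitude);
  });
}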

[…]

Cutting the mustard

There is another principle that has gotten very popular lately. By using some very specific feature tests you can make a distinction between old legacy browsers and modern browsers.

Code language: JavaScript

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // bootstrap the javascript application
}

In itself it is a perfectly valid way to make sure the browser has a certain level of standards support. But it is also dangerous, because support for querySelector, localStorage and addEventListener says nothing about support for other standards.

Even if the browser passes the test, you really still need to do proper feature detection for each and every API you are depending on.

[…]

There are features where the whole premise of feature detection fails horribly. Some browsers ship features that are so broken they do not work at all. Sometimes it is a bug, and sometimes it is just pure laziness or incompetence. That may sound harsh, but I’m sure you will agree with me by the end of this article.

The most benign variants are simply bugs. Everybody ships bugs, and the good browsers quickly fix them. Take, for example, Opera 18, which had the API for Web Notifications but crashed when you tried to use it. Blink, the rendering engine, actually supported the API, but Opera did not have a proper back-end implementation, and unfortunately the feature got enabled by mistake. I reported it and it was fixed in Opera 19. These things happen.
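
Detection cannot save you from an implementation that falls over in use, and nothing catches an outright browser crash like Opera 18’s. Still, wrapping the first touch of a risky API in try/catch at least keeps the rest of your script alive when an implementation merely throws. A sketch:

Code language: JavaScript

// The feature test passes, but a broken back-end may still throw on use.
var notificationsUsable = false;
if (window.Notification) {
  try {
    // Touch the API once before the application relies on it.
    notificationsUsable =
      typeof Notification.requestPermission === 'function';
  } catch (err) {
    notificationsUsable = false; // broken implementation: treat as absent
  }
}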

Introducing Resilient Web Design

I wrote a thing. The thing is a book. But the book is not published on paper. This book is on the web. It’s a web book. Or “wook” if you prefer …please don’t prefer. Here it is:

Resilient Web Design.

It’s yours for free.

Much of the subject matter will be familiar if you’ve seen my conference talks from the past couple of years, particularly Enhance! and Resilience. But the book ended up taking some twists and turns that surprised me. It turned out to be a bit of a history book: the history of design, the history of the web.

Resilient Web Design is a short book. It’s between sixteen and seventeen thousand words long. You could read the whole thing in a couple of hours. Or, because the book has seven chapters, you could take fifteen to twenty minutes out of a day to read one chapter, and you’d have the whole thing read in a week.

If you make websites in any capacity, I hope that this book will resonate with you. Even if you don’t make websites, I still hope there’s an interesting story in there for you.

You can read the whole book on the web, but if you’d rather have a single file to carry around, I’ve made some PDFs as well: one in portrait, one in landscape.

I’ve licensed the book quite liberally. It’s released under a Creative Commons attribution share-alike licence. That means you can re-use the material in any way you want (even commercial usage) as long as you provide some attribution and use the same licence. So if you’d like to release the book in some other format like ePub or anything, go for it.

I’m currently making an audio version of Resilient Web Design. I’ll be releasing it one chapter at a time as a podcast. Here’s the RSS feed if you want to subscribe to it. Or you can subscribe directly from iTunes.

I took my sweet time writing this book. I wrote the first chapter in March 2015. I wrote the last chapter in May 2016. Then I sat on it for a while, figuring out what to do with it. Eventually I decided to just put the whole thing up on the web—it seems fitting.

Whereas the writing took over a year of solid procrastination, making the website went surprisingly quickly. After one weekend of marking up and styling, I had most of it ready to go. Then I spent a while tweaking. The source files are on GitHub.

I’m pretty happy with the end result. I’ll write a bit more about some of the details over the next while—the typography, the offline functionality, print styles, and stuff like that. In the meantime, I hope you’ll peruse this little book at your leisure…

Resilient Web Design.

If you like it, please spread the word.

More Proof We Don't Control Our Web Pages

I’ve talked about this before: As web designers, we can’t trust the network. Sure, we have to contend with mobile data “dead zones” and dropped connections as our users move about throughout the day, but there’s a lot more to the network that’s beyond our control.

Here’s a roundup of some of my “favorite” network-related headlines from the last few years:

Some of these issues can be avoided by serving content over HTTPS, but that still won’t enable you to bypass things like firewall blacklists (which led to the jQuery outage on Sky). Your best bet is to design defensively and make sure your users can still accomplish their goals on your site when some resources are missing or markup is altered.
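
For external scripts specifically, one classic defensive pattern is a same-origin fallback for when the CDN copy never arrives (the URLs here are hypothetical):

Code language: JavaScript

// Placed directly after the CDN script tag: if the CDN copy was blocked
// or dropped, window.jQuery is undefined, so load our own copy instead.
if (!window.jQuery) {
  document.write('<script src="/js/jquery.min.js"><\/script>');
}

Even with a fallback in place, the core content and tasks on the page should still work if both copies fail to load.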

We can’t control what happens to us in this world, we can only control our reaction to it.