Resiliency

The Fallacies of Distributed Computing (Applied to Front-End Performance)

In the mid-nineties, Laurence Peter Deutsch and colleagues at Sun Microsystems devised a list of what they called The Fallacies of Distributed Computing: common assumptions that developers working on distributed systems were prone to making, mistakes that would impact the reliability, security, or resilience of their software. Those fallacies are as follows:

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

Reading over the eight fallacies listed out so plainly, they seem so obvious and clear that you’d struggle to believe that anyone would ever fall foul of them: of course we know bandwidth isn’t infinite! The thing is, these fallacies are obvious, but they don’t exist to teach us anything new; they exist to remind us of the fundamentals. Nor are they intended to explain or describe normal conditions; they’re intended to remind us of worst-case scenarios. They’re not saying that the network is always unreliable, or that latency is always high, or that bandwidth is always low: they’re saying that, sometimes, one or all of them will be sub-optimal. We should prepare for that.

Yet time and time again I see developers falling into the same old traps—making assumptions or overly optimistic predictions about the conditions in which their apps will run. Developers frequently tell me things like most of our users are on wifi, or 4G is pretty much everywhere now, or people only ever visit the site from inside the office anyway. Even if this is statistically true—even if your analytics corroborate the claim—planning only for the best leaves you utterly unprepared for the worst. To paraphrase Jeremy, it’s not about how well it works, but how well it fails.

What if images don't arrive? A tale of a badly designed lazy loader

If you’re looking for an example of exactly what not to do in terms of front-end performance, I can’t think of a better one than this. In a bizarre attempt at improving page loading, they threw away a lot of the performance optimizations browsers give us for free, and ended up doing the opposite:

I was recently conducting some exploratory work for a potential client when I hit upon a pretty severe flaw in a design decision they’d made: They’d built a responsive image lazyloader in JavaScript which, by design, worked by:

  1. immediately applying display: none; to the <body>;
  2. waiting until the very last of the page’s images had arrived;
  3. once they’d arrived, removing the display: none; and gradually fading the page into visibility.

Not only does this strike me as an unusual design decision—setting out to build a lazyloader and then having it intentionally block rendering—there had been no defensive strategy to answer the question: what if something goes wrong with image delivery?

‘Something wrong’ is exactly what happened. Due to an imperfect combination of:

  1. images being completely unoptimised, plus;
  2. a misconfiguration with their image transformation service leading to double downloads for all images;

…they’d managed to place 27.9MB of images onto the Critical Path. Almost 30MB of previously non-render-blocking assets had just been turned into blocking ones on purpose, with no escape hatch. Start render time was as high as 27.1s over a cable connection.

If you’re going to build an image loader that hides the whole page until all images are ready, you must also ask yourself what if the images don’t arrive?
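
One possible escape hatch (not from the original post; a minimal illustrative sketch) is to treat the fade-in as an enhancement and put a hard time limit on how long the page may stay hidden, so a slow or failed image can never block rendering indefinitely:

Code language: JavaScript

// Assumes this script runs at the end of <body>, once the body exists.
// Hide the page only after we know JavaScript is running,
// and reveal it again no matter what after a timeout.
var REVEAL_TIMEOUT = 3000; // hypothetical budget in milliseconds

document.body.style.display = 'none';

function revealPage() {
  document.body.style.display = '';
}

// Escape hatch: if images are slow or never arrive, show the page anyway.
var fallback = setTimeout(revealPage, REVEAL_TIMEOUT);

// The load event fires once all images have arrived.
window.addEventListener('load', function() {
  clearTimeout(fallback);
  revealPage();
});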

Rethinking Offline First sync for Service Workers

On the surface, Service Workers look quite similar to Web Workers. They both run on separate threads from the main UI thread, they have a global self object, and they tend to support the same APIs. However, while Web Workers offer a large degree of control over their lifecycle (you can create and terminate them at will) and are able to execute long-running JavaScript tasks (in fact, they’re designed for this), Service Workers explicitly don’t allow either of these things. In fact, a Service Worker is best thought of as “fire-and-forget” — it responds to events in an ephemeral way, and the browser is free to terminate any Service Worker that takes too long to fulfill a request or makes too much use of shared resources.

This led us to our first real hurdle with Service Worker. Our goal, as we originally conceived it, was to use PouchDB’s existing replication APIs to enable bi-directional sync between the client and the server, with the client code isolated entirely to the Service Worker.

[…]

This resulted in a silent error, which took quite a while to debug. The culprit? Well, PouchDB’s “live” sync depends on HTTP longpolling — in other words, it maintains an ongoing HTTP connection with the CouchDB server, which is used to send real-time updates from the server to the client. As it turns out, this is a big no-no in Service Worker Land, and the browser will unceremoniously drop your Service Worker if it tries to maintain any ongoing HTTP connections. The same applies to Web Sockets, Server-Sent Events, WebRTC, and any other network APIs where you may be tempted to keep a constant connection with your server.

What we realized is that “the Zen of Service Worker” is all about embracing events. The Service Worker receives events, it responds to events, and it (ideally) does so in a timely manner — if not, the browser may preemptively terminate it. And this is actually a good design decision in the spec, since it prevents malicious websites from installing rogue Service Workers that abuse the user’s battery, memory, or CPU.
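
In practice, embracing events looks something like the following minimal fetch handler (an illustrative sketch, not the PouchDB team’s code): respond to each request quickly, from cache where possible, and let the browser terminate the worker between events.

Code language: JavaScript

// service-worker.js
self.addEventListener('fetch', function(event) {
  // Respond to this single event, then let the browser shut the worker down.
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      // Serve from the cache if we can, otherwise go to the network.
      return cached || fetch(event.request);
    })
  );
});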

Serving old browsers limited JavaScript and CSS

The Guardian navigation as seen in Internet Explorer 8: a very basic document with little to no CSS applied. Unsophisticated yet functional.
The nature.com homepage as seen in Internet Explorer 9: a very simple layout without much CSS applied.

Since we’re still stuck with a small percentage of users on various versions of Internet Explorer and other older browsers, a good way to deal with them seems to be to serve most or all of our JavaScript and CSS only to browsers that cut the mustard, leaving the older set with a basic but functional experience. That removes the risk that our newer, shiny stuff will inevitably break, and the need for polyfills that may or may not work.

See also: Cutting the mustard with only CSS
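
The capability test itself is only a few lines. This is an illustrative sketch; the exact feature checks and the bundle path are assumptions, not taken from The Guardian or nature.com:

Code language: JavaScript

// Only browsers that pass this check get the full JavaScript experience.
// Everything else falls back to the plain HTML and basic CSS.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
  var script = document.createElement('script');
  script.src = '/assets/enhanced.js'; // hypothetical enhanced bundle
  document.head.appendChild(script);
  // Optional hook for styling the enhanced experience.
  document.documentElement.className += ' cutsthemustard';
}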

1% or 13 million JavaScript requests per month to BuzzFeed time out

More evidence that we don’t fully control our web pages and that a non-zero number of page views don’t execute JavaScript fully or correctly, despite it being enabled.

Says @ianfeather at #AllDayHey: “our monitoring tells us that around 1% of requests for JavaScript on BuzzFeed time out. That’s around 13 million requests per month.” A reminder, if one were needed, that we should design for resilience.

Error logging - Robust Client-Side JavaScript

The standard approach is to monitor all exceptions on a page and to handle them centrally, for example using window.onerror. Then gather a bunch of context information and send an incident report to a log server. That server stores all reports, makes them accessible using an interface and probably sends an email to the developer.

Here is a simple global error reporter:

Code language: JavaScript

window.onerror = function(message, file, line, column, error) {
  var errorToReport = {
    // The error object is not passed in all browsers, so guard against it.
    type: error ? error.name : '',
    message: message,
    file: file,
    line: line,
    column: column,
    stack: error ? error.stack : '',
    userAgent: navigator.userAgent,
    href: location.href
  };
  // Encode the report so special characters survive the query string.
  var url = '/error-reporting?error=' +
    encodeURIComponent(JSON.stringify(errorToReport));
  // Loading an image is a simple, widely supported way to fire a GET beacon.
  var image = new Image();
  image.src = url;
};

This code sends a report to the address /error-reporting using a GET request.

The example above is not enough. It is not that easy to compile a meaningful, cross-browser report from an exception. Tools like TraceKit and StackTrace.js help to extract meaning from exceptions.

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.

Abstraction libraries - Robust Client-Side JavaScript

Every year or so, someone writes an article titled “You do not need jQuery” or “You do not need Lodash”. These articles point out that the native APIs have been improved since, or that the old browsers that prevented the usage of native APIs have died out. That is right, but they often miss the other main goal of libraries.

Libraries provide a concise and consistent API that is an abstraction of several inconsistent browser APIs. For example, using jQuery for traversing and manipulating the DOM, handling events and animation is still more pleasant than using the respective native APIs. This is because jQuery provides an unbeaten abstraction: A list type containing DOM nodes with powerful map, reduce and filter operations. Also, jQuery still deals with browser inconsistencies and tries to level them.
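
To make the abstraction point concrete, here is a small comparison (my own illustration, not from the quoted book, and it assumes jQuery is loaded as $): the same filter-and-modify operation with jQuery and with native DOM APIs.

Code language: JavaScript

// jQuery: one list type with chainable filter/map-style operations.
$('li')
  .filter('.active')
  .addClass('highlight');

// Native DOM: a NodeList is not an Array, so convert it before filtering.
Array.prototype.slice.call(document.querySelectorAll('li'))
  .filter(function(item) {
    return item.classList.contains('active');
  })
  .forEach(function(item) {
    item.classList.add('highlight');
  });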

For the sake of robustness, use well-tested, rock-solid libraries. The time, resources and brain power that went into the creation and maintenance of such libraries do not compare to your own solutions.

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.

Strict Mode - Robust Client-Side JavaScript

ECMAScript 5 (2009) started to deprecate error-prone programming practices. But it could not just change code semantics from one day to the next. This would have broken most existing code.

In order to maintain backwards compatibility, ECMAScript 5 introduces the Strict Mode as an opt-in feature. In Strict Mode, common pitfalls are removed from the language or throw visible exceptions. Previously, several programming mistakes and bogus code were ignored silently. The Strict Mode turns these mistakes into visible errors – see failing fast.

Enable the Strict Mode by placing a marker at the beginning of a script:

Code language: JavaScript

'use strict';
window.alert('This code is evaluated in Strict Mode! Be careful!');

Or at the beginning of a function:

Code language: JavaScript

function strictFunction() {
  'use strict';
  window.alert('This function is evaluated in Strict Mode! Be careful!');
}

Syntax-wise, 'use strict'; is simply an expression statement with a string literal. This code does not do anything when evaluated. It is a meaningful marker for browsers that support ECMAScript 5, and innocuous code for browsers that do not.
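
For example (an illustration of my own, not from the quoted book), a typo that would silently create a global variable in sloppy mode throws a visible ReferenceError in Strict Mode:

Code language: JavaScript

'use strict';

function countItems(list) {
  // Typo: "totl" was never declared.
  // Without Strict Mode this silently creates a global variable;
  // in Strict Mode the assignment throws instead.
  totl = list.length;
  return totl;
}

countItems([1, 2, 3]); // ReferenceError: totl is not defined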

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.

Duck typing - Robust Client-Side JavaScript

As a weakly typed language, JavaScript performs implicit type conversion so developers do not need to think much about types. The concept behind this is called duck typing: “If it walks like a duck and quacks like a duck, it is a duck.”

typeof and instanceof check what a value is and where it comes from. As we have seen, both operators have serious limitations.

In contrast, duck typing checks what a value does and provides. After all, you are not interested in the type of a value, you are interested in what you can do with the value.

[…]

Duck typing would ask instead: What does the function do with the value? Then check whether the value fulfills the needs, and be done with it.

[…]

This check is not as strict as instanceof, and that is an advantage. A function that does not assert types but object capabilities is more flexible.

For example, JavaScript has several types that do not inherit from Array.prototype but walk and talk like arrays: Arguments, HTMLCollection and NodeList. A function that uses duck typing is able to support all array-like types.
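
A minimal sketch of such a duck-typing check (my own illustration, not from the quoted book): accept anything that provides a numeric length and indexed access, instead of insisting on instanceof Array.

Code language: JavaScript

// Works for arrays and array-likes (Arguments, NodeList, HTMLCollection, …).
function first(value) {
  // Duck typing: ask what the value provides, not where it comes from.
  if (value && typeof value.length === 'number' && value.length > 0) {
    return value[0];
  }
  return null;
}

first([1, 2, 3]);                      // 1
first(document.querySelectorAll('p')); // the first <p> element, or null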

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.

Conditional statements and truthy values - Robust Client-Side JavaScript

The key to robust JavaScript is asking “if” a lot. During the concept phase, ask “what if”. In the code, ask if to handle different cases differently.

The if statement, or conditional statement, consists of a condition, a code block and an optional second code block.

Code language: JavaScript

if (condition) {
  // …
} else {
  // …
}

When an if statement is evaluated, first the condition expression is evaluated. The result of the expression is then converted into a boolean value, true or false. If this result is true, the first code block is executed, otherwise the second block, if given.

Most likely, this is not new to you. The reason we are revisiting it is the conversion into boolean. It means you can use a condition expression that does not necessarily evaluate to a boolean value. Other types, like Undefined, Null, String or Object are possible. For example, it is possible to write if ("Hello!") {…}.

If you rely on the implicit conversion, you should learn the conversion rules. ECMAScript defines an internal function ToBoolean for this purpose. In our code, we can use the public Boolean() function to convert a value into boolean. This delegates to the internal ToBoolean function.

To illustrate the conversion, imagine that

Code language: JavaScript

if (condition) {
  // …
} else {
  // …
}

is a short version of

Code language: JavaScript

if (Boolean(condition) === true) {
  // …
} else {
  // …
}

Values are called truthy when ToBoolean converts them into true. Values are called falsy when ToBoolean converts them into false.

The way ToBoolean works is simple, but with a twist. Let us quote the ECMAScript specification which is quite readable for once:

ToBoolean Conversions

  Argument Type   Result
  Undefined       Return false.
  Null            Return false.
  Boolean         Return argument.
  Number          If argument is +0, -0, or NaN, return false; otherwise return true.
  String          If argument is the empty String (its length is zero), return false; otherwise return true.
  Symbol          Return true.
  Object          Return true.

As you can see, most types have a clear boolean counterpart. All objects, including functions, dates, regular expressions and errors, are truthy. The two types denoting emptiness, undefined and null, are falsy.

For numbers and strings though, it is complicated. Numbers are truthy except for zeros and NaN. Strings are truthy except for empty strings.
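
A few conversions to illustrate the table above (my own examples):

Code language: JavaScript

Boolean(undefined);     // false
Boolean(null);          // false
Boolean(0);             // false
Boolean(NaN);           // false
Boolean('');            // false

Boolean('0');           // true (non-empty string)
Boolean([]);            // true (all objects are truthy)
Boolean({});            // true
Boolean(function() {}); // true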

Quoted content by Mathias Schäfer is licensed under CC BY-SA. See the other snippets from Robust Client-Side JavaScript.