Performance

Measure Performance with the RAIL Model

RAIL is a user-centric performance model that breaks down the user’s experience into key actions. RAIL’s goals and guidelines aim to help developers and designers ensure a good user experience for each of these actions. By laying out a structure for thinking about performance, RAIL enables designers and developers to reliably target the work that has the highest impact on user experience.

Every web app has four distinct aspects to its life cycle (Response, Animation, Idle, and Load), and performance fits into each of them in different ways.

Summary

RAIL is a lens for looking at a website’s user experience as a journey composed of distinct interactions. Understand how users perceive your site in order to set performance goals with the greatest impact on user experience.

  • Focus on the user.
  • Respond to user input in under 100ms.
  • Produce a frame in under 10ms when animating or scrolling.
  • Maximize main thread idle time.
  • Load interactive content in under 5000ms.
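
To make these budgets concrete, here is a minimal sketch (my own illustration, not part of the RAIL material) that uses the Long Tasks API via PerformanceObserver to flag main-thread work long enough to blow the 100ms response budget:

if ('PerformanceObserver' in window) {
    var longTaskObserver = new PerformanceObserver(function(list) {
        list.getEntries().forEach(function(entry) {
            // Long task entries cover main-thread work of 50ms or more; anything
            // past 100ms leaves no room to respond to input within RAIL's budget.
            if (entry.duration > 100) {
                console.warn('Long task: ' + Math.round(entry.duration) + 'ms', entry);
            }
        });
    });
    longTaskObserver.observe({ entryTypes: ['longtask'] });
}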

Document.createDocumentFragment()

Creates a new empty DocumentFragment into which DOM nodes can be added to build an offscreen DOM tree.

[…]

DocumentFragments are DOM Node objects which are never part of the main DOM tree. The usual use case is to create the document fragment, append elements to the document fragment and then append the document fragment to the DOM tree. In the DOM tree, the document fragment is replaced by all its children.

Since the document fragment is in memory and not part of the main DOM tree, appending children to it does not cause page reflow (computation of an element’s position and geometry). Historically, using document fragments could result in better performance.

var element  = document.getElementById('ul'); // assuming ul exists
var fragment = document.createDocumentFragment();
var browsers = ['Firefox', 'Chrome', 'Opera', 
    'Safari', 'Internet Explorer'];
 
browsers.forEach(function(browser) {
    var li = document.createElement('li');
    li.textContent = browser;
    fragment.appendChild(li);
});
 
element.appendChild(fragment);

Using requestIdleCallback

Many sites and apps have a lot of scripts to execute. Your JavaScript often needs to be run as soon as possible, but at the same time you don’t want it to get in the user’s way. If you send analytics data when the user is scrolling the page, or you append elements to the DOM while they happen to be tapping on the button, your web app can become unresponsive, resulting in a poor user experience.

[…]

The good news is that there’s now an API that can help: requestIdleCallback. In the same way that adopting requestAnimationFrame allowed us to schedule animations properly and maximize our chances of hitting 60fps, requestIdleCallback will schedule work when there is free time at the end of a frame, or when the user is inactive. This means that there’s an opportunity to do your work without getting in the user’s way.

[…]

Why should I use requestIdleCallback?

Scheduling non-essential work yourself is very difficult to do. It’s impossible to figure out exactly how much frame time remains, because after requestAnimationFrame callbacks execute there are style calculations, layout, paint, and other browser internals that still need to run, and a home-rolled solution can’t account for any of those. You would also need to attach listeners to every kind of interaction event (scroll, touch, click), even if you don’t need them for functionality, just to be sure the user isn’t interacting. The browser, on the other hand, knows exactly how much time is available at the end of the frame and whether the user is interacting, so through requestIdleCallback we gain an API that allows us to make use of any spare time in the most efficient way possible.
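
As a rough illustration (my own sketch, not code from the article), the usual pattern is to queue non-essential tasks and drain them inside a requestIdleCallback, checking the IdleDeadline the browser passes in and using the timeout option as a safety net; taskQueue and sendAnalyticsEvent below are made-up names:

var taskQueue = [];
var isCallbackScheduled = false;

function queueTask(task) {
    taskQueue.push(task);
    scheduleWork();
}

function scheduleWork() {
    if (isCallbackScheduled) return;
    isCallbackScheduled = true;
    // Ask for a callback when the frame has spare time, or after 2s at the latest.
    requestIdleCallback(processQueue, { timeout: 2000 });
}

function processQueue(deadline) {
    isCallbackScheduled = false;
    // Keep working while idle time remains (or the timeout fired) and tasks are left.
    while ((deadline.timeRemaining() > 0 || deadline.didTimeout) && taskQueue.length > 0) {
        taskQueue.shift()();
    }
    // Anything left over waits for the next idle period.
    if (taskQueue.length > 0) scheduleWork();
}

// Usage: defer an analytics beacon until the browser is idle.
queueTask(function() {
    sendAnalyticsEvent('scroll-depth', { depth: 75 }); // placeholder for your own reporting code
});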

The Fallacies of Distributed Computing (Applied to Front-End Performance)

In the mid-nineties, Laurence Peter Deutsch and colleagues at Sun Microsystems devised a list of what they called The Fallacies of Distributed Computing. These were a list of common assumptions that developers working on distributed systems were prone to making; mistakes that would impact the reliability, security, or resilience of their software. Those fallacies are as follows:

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

Reading over the eight fallacies listed out so plainly, they seem so obvious and clear that you’d struggle to believe that anyone would ever fall foul of them: of course we know bandwidth isn’t infinite! The thing is, these fallacies are obvious, but they don’t exist to teach us anything new; they exist to remind us of the fundamentals. Nor are they intended to explain or describe normal conditions; they’re intended to remind us of worst-case scenarios. They’re not saying that the network is always unreliable, or that latency is always high, or that bandwidth is always low: they’re saying that, sometimes, one or all of them will be sub-optimal. We should prepare for that.

Yet time and time again I see developers falling into the same old traps: making assumptions or overly optimistic predictions about the conditions in which their apps will run. Developers frequently tell me things like “most of our users are on wifi”, “4G is pretty much everywhere now”, or “people only ever visit the site from inside the office anyway”. Even if this is statistically true (even if your analytics corroborate the claim), planning only for the best leaves you utterly unprepared for the worst. To paraphrase Jeremy, it’s not about how well it works, but how well it fails.

What if images don't arrive? A tale of a badly designed lazy loader

If you’re looking for an example of exactly what not to do in terms of front-end performance, I can’t think of a better one than this. They threw away a lot of the performance optimizations browsers give us for free in a bizarre attempt at improving page loading, which ended up doing the opposite:

I was recently conducting some exploratory work for a potential client when I hit upon a pretty severe flaw in a design decision they’d made: They’d built a responsive image lazyloader in JavaScript which, by design, worked by:

  1. immediately applying display: none; to the <body>;
  2. waiting until the very last of the page’s images had arrived;
  3. once they’d arrived, removing the display: none; and gradually fading the page into visibility.

Not only does this strike me as an unusual design decision (setting out to build a lazyloader and then having it intentionally block rendering), but there was also no defensive strategy to answer the question: what if something goes wrong with image delivery?

‘Something wrong’ is exactly what happened. Due to an imperfect combination of:

  1. images being completely unoptimised, and
  2. a misconfiguration with their image transformation service leading to double downloads for all images;

…they’d managed to place 27.9MB of images onto the Critical Path. Almost 30MB of previously non-render-blocking assets had just been turned into blocking ones on purpose, with no escape hatch. Start render time was as high as 27.1s over a cable connection.

If you’re going to build an image loader that hides the whole page until all images are ready, you must also ask yourself what if the images don’t arrive?
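
For what it’s worth, here is a hedged sketch of the kind of escape hatch that was missing; it is not the client’s code, and the is-hidden class and 3-second deadline are assumptions. If content is hidden until images settle, count failed images as settled and reveal the page after a deadline no matter what:

var REVEAL_DEADLINE = 3000; // an assumed budget in milliseconds, not from the article

function revealPage() {
    document.body.classList.remove('is-hidden');
}

// Reveal once every image has settled, counting failures as "settled"
// so a broken image can never keep the page hidden...
Promise.all(Array.prototype.map.call(document.images, function(img) {
    return img.complete ? Promise.resolve() : new Promise(function(resolve) {
        img.addEventListener('load', resolve);
        img.addEventListener('error', resolve);
    });
})).then(revealPage);

// ...but never wait longer than the deadline, whatever happens to the images.
setTimeout(revealPage, REVEAL_DEADLINE);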

FLIP.js

FLIP is an approach to animations that remaps expensive-to-animate properties, like width, height, left and top, to significantly cheaper changes using transforms. It does this by taking two snapshots: one of the element’s First position (F), another of its Last position (L). It then uses a transform to Invert (I) the element’s changes, such that the element appears to still be in the First position. Lastly, it Plays (P) the animation forward by removing the transformations applied in the Invert step.
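
To make the four steps concrete, here is a minimal hand-rolled sketch of the same idea using only standard DOM APIs; flipMove, the .box element and the moved class are illustrative names, not FLIP.js’s actual API:

function flipMove(element, applyChange) {
    // First: record where the element is now.
    var first = element.getBoundingClientRect();

    // Last: apply the layout change and measure the new position.
    applyChange();
    var last = element.getBoundingClientRect();

    // Invert: work out the transform that makes the element appear
    // to still be sitting in its First position.
    var deltaX = first.left - last.left;
    var deltaY = first.top - last.top;

    // Play: animate from the inverted transform back to none, so the browser
    // only ever animates the cheap transform property, never layout.
    element.animate([
        { transform: 'translate(' + deltaX + 'px, ' + deltaY + 'px)' },
        { transform: 'translate(0, 0)' }
    ], { duration: 300, easing: 'ease-out' });
}

// Usage: animate an element to wherever the 'moved' class puts it.
var box = document.querySelector('.box');
flipMove(box, function() {
    box.classList.add('moved');
});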

Taming huge collections of DOM nodes

Hajime Yamasaki Vukelic has come to the conclusion that, if you’re dealing with a very large number of DOM nodes that need to be updated in real time, frameworks are usually incredibly slow on top of the already-slow DOM operations you have to do. The solution they settled on is to avoid frameworks altogether for these scenarios and use vanilla JavaScript. Also of note is that repaints and reflows are going to be big bottlenecks regardless of what you use.

  • If you are looking for performance, don’t use frameworks. Period.
  • At the end of the day, DOM is slow.
  • Repaints and reflows are even slower.
  • Whatever performance you get out of your app, repaints and reflows are still going to be the last remaining bottleneck.
  • Keep the number of DOM nodes down.
  • Cache created DOM nodes, and use them as a pool of pre-assembled elements you can put back in the page as needed (see the sketch after this list).
  • Logging the timings in IE/Edge console is unreliable because the developer tools have a noticeable performance hit.
  • Measure! Always measure performance first, then only fix the issues you’ve reliably identified.
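
As an illustration of the pooling point above (my own sketch, with assumed names like acquireNode and a #log element), detached li nodes are reused instead of recreated, and inserts are batched through a DocumentFragment so the page reflows once per render rather than once per item:

// A simple pool of reusable <li> nodes: take one from the pool instead of
// creating a new node, and put it back when it leaves the page.
var pool = [];

function acquireNode() {
    return pool.pop() || document.createElement('li');
}

function releaseNode(node) {
    node.textContent = '';
    pool.push(node);
}

// Re-render a list by recycling its current children and batching the new
// ones through a DocumentFragment.
function renderList(listElement, items) {
    while (listElement.firstChild) {
        releaseNode(listElement.removeChild(listElement.firstChild));
    }

    var fragment = document.createDocumentFragment();
    items.forEach(function(item) {
        var li = acquireNode();
        li.textContent = item;
        fragment.appendChild(li);
    });
    listElement.appendChild(fragment);
}

// Usage with a hypothetical #log element that is updated in real time.
renderList(document.getElementById('log'), ['row 1', 'row 2', 'row 3']);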