Snippets

Front-End Performance Checklist 2017

This is an incredibly exhaustive list of performance tweaks, improvements, and best practices. Some may be outside the scope of smaller websites, but there are plenty of things for everyone.

Back in the day, performance was often a mere afterthought. Deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s config file. Looking back now, things seem to have changed quite significantly.

Performance isn’t just a technical concern: It matters, and when baking it into the workflow, design decisions have to be informed by their performance implications. Performance has to be measured, monitored and refined continually, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance).

So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) front-end performance checklist for 2017 — an overview of the issues you might need to consider to ensure that your response times are fast and your website smooth.

Don’t go single-page-app too soon, or how GitHub’s reimplementation of navigation in JavaScript loses streaming capability

A few weeks ago I was at Heathrow airport getting a bit of work done before a flight, and I noticed something odd about the performance of GitHub: It was quicker to open links in a new window than simply click them.

[…]

When you load a page, the browser takes a network stream and pipes it to the HTML parser, and the HTML parser is piped to the document. This means the page can render progressively as it’s downloading. The page may be 100k, but it can render useful content after only 20k is received.

This is a great, ancient browser feature, but as developers we often engineer it away. Most load-time performance advice boils down to “show them what you got” - don’t hold back, don’t wait until you have everything before showing the user anything.
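To make that concrete, here’s a minimal Node.js sketch (an illustration, not from the article) of a server that streams its HTML - the browser can render the heading while the comments are still being generated:

const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  // Flush the shell immediately - the parser renders it straight away:
  res.write('<!doctype html><title>Comments</title><h1>Comments</h1>');
  let count = 0;
  const timer = setInterval(() => {
    // Simulate a slow data source, one comment every 250ms:
    res.write(`<p>Comment ${++count}</p>`);
    if (count === 20) {
      clearInterval(timer);
      res.end();
    }
  }, 250);
}).listen(8080);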

GitHub cares about performance, so they server-render their pages. However, when navigating within the same tab, navigation is entirely reimplemented using JavaScript. Something like…


// …lots of code to reimplement browser navigation…
const response = await fetch('page-data.inc');
const html = await response.text();
document.querySelector('.content').innerHTML = html;
// …loads more code to reimplement browser navigation…

This breaks the rule, as all of page-data.inc is downloaded before anything is done with it. The server-rendered version doesn’t hoard content this way; it streams, making it faster. For GitHub’s client-side render, a lot of JavaScript was written to make this slow.

I’m just using GitHub as an example here - this anti-pattern is used by almost every single-page-app.

Switching content in the page can have some benefits, especially if you have some heavy scripts, as you can update content without re-evaluating all that JS. But can we do that without losing streaming?

[…]

Newline-delimited JSON

A lot of sites deliver their dynamic updates as JSON. Unfortunately JSON isn’t a streaming-friendly format. There are streaming JSON parsers out there, but they aren’t easy to use.

So instead of delivering a chunk of JSON:


{
  "Comments": [
    {"author": "Alex", "body": "…"},
    {"author": "Jake", "body": "…"}
  ]
}

…deliver each JSON object on a new line:


{"author": "Alex", "body": "…"}
{"author": "Jake", "body": "…"}

This is called “newline-delimited JSON” and there’s a sort-of standard for it. Writing a parser for the above is much simpler. In 2017 we’ll be able to express this as a series of composable transform streams:


const response = await fetch('comments.ndjson');
const comments = response.body
  // From bytes to text:
  .pipeThrough(new TextDecoderStream())
  // Buffer until newlines:
  .pipeThrough(splitStream('\n'))
  // Parse chunks as JSON:
  .pipeThrough(parseJSON());
 
for await (const comment of comments) {
  // Process each comment and add it to the page:
  // (via whatever template or VDOM you're using)
  addCommentToPage(comment);
}

…where splitStream and parseJSON are reusable transform streams. But in the meantime, for maximum browser compatibility we can hack it on top of XHR.
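For the curious, here’s a minimal sketch of what those two helpers might look like on top of the standard TransformStream API - the names come from the article, but the bodies are illustrative rather than the demo’s actual code:

// Buffers incoming text and emits one chunk per separator:
function splitStream(separator) {
  let buffer = '';
  return new TransformStream({
    transform(chunk, controller) {
      buffer += chunk;
      const parts = buffer.split(separator);
      buffer = parts.pop(); // keep the trailing partial chunk buffered
      for (const part of parts) controller.enqueue(part);
    },
    flush(controller) {
      if (buffer) controller.enqueue(buffer);
    }
  });
}

// Parses each non-empty text chunk as a JSON value:
function parseJSON() {
  return new TransformStream({
    transform(chunk, controller) {
      if (chunk.trim()) controller.enqueue(JSON.parse(chunk));
    }
  });
}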

Again, I’ve built a little demo where you can compare the two. Here are the 3G results:

Versus normal JSON, ND-JSON gets content on screen 1.5 seconds sooner, although it isn’t quite as fast as the iframe solution. It has to wait for a complete JSON object before it can create elements, so you may run into a lack of streaming if your JSON objects are huge.

Don’t go single-page-app too soon

As I mentioned above, GitHub wrote a lot of code to create this performance problem. Reimplementing navigations on the client is hard, and if you’re changing large parts of the page it might not be worth it.

[…]

[A] simple no-JavaScript browser navigation to a server-rendered page is roughly as fast. The test page is really simple aside from the comments list; your mileage may vary if you have a lot of complex content repeated between pages (basically, I mean horrible ad scripts), but always test! You might be writing a lot of code for very little benefit, or even making it slower.

An Intro to Monkey Testing with Gremlins.js

A common adage in our industry is, “You never can predict how the user will use your product once they get it in their hands.” If you’ve ever watched a stakeholder use a website or web application for the first time, you may know this firsthand. I can’t count the number of times I’ve seen a user seemingly forget how to use websites on a mobile device, or try to use one in a way that makes you think, “But no one would actually do that in real life!”

The thing is, you never really do know what a user may do in the moment. They might be in a frantic state of mind, trying to accomplish something too quickly, and not tap or type the way a calm, focused user might. This fits right into the all-too-common trap of designing and developing for the best-case scenario, and not thinking about edge cases or asking, “What happens if things don’t happen perfectly in this order?” As developers, we tend to build things assuming that everything will be understood, that the user is rational and should just know that tapping around too quickly might cause something weird to happen. It can even affect those who make accidental taps or clicks when not giving an app their full attention - how many times have you accidentally tapped on a mobile device while walking and talking, trying at the same time to reply to a tweet or email?

Tools that help us test the unpredictable aren’t entirely new. In 2012, Netflix open-sourced their internal service Chaos Monkey, which “terminates virtual machine instances and containers that run inside of your production environment.” In plain language, it’s a service that tears down servers at random to ensure an entire system doesn’t violently collapse during a failure. Our development communities also remind us not to design just for “the happy path” - but how can we actually detect unpredicted points of failure in our interfaces the way we can with our server architectures?

If a hundred monkeys at typewriters can write the works of Shakespeare, then one monkey should surely be able to find bugs and problems in our interfaces.

Bring in the monkeys

Monkey testing is a method of testing that generates random user input - clicks, swipes, entering input - with the sole purpose of finding issues with, or entirely breaking, your application. This is unlike unit and acceptance testing, where you write test cases that occur in a specific order or under a specific set of conditions, which ultimately creates bias in how your interface is tested. Developers have less control over how a monkey test will execute, and since the tests are random every time they run, you’ll never be testing just one scenario, but rather an infinite combination of interactions.

Although this type of testing is available for most technology stacks, things built for the web haven’t necessarily got there yet. For example, the Android SDK has a UI Exerciser Monkey that handles most interface-level and system-level events. As web developers have begun to think more critically about performance and stress testing, some of these ideas have finally made it over to the world of the web in the form of Gremlins.js, a JavaScript-based monkey testing library written by the team at Marmelab.
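To give a flavour of what that looks like, here’s a minimal sketch based on the Gremlins.js README of the time - verify against the current documentation before relying on it:

// Unleash a default horde of gremlins against the current page:
gremlins.createHorde().unleash();

// Or hand-pick the species of random input, plus a "mogwai" watchdog
// that stops the horde once too many errors have been triggered:
gremlins.createHorde()
  .gremlin(gremlins.species.clicker())
  .gremlin(gremlins.species.formFiller())
  .gremlin(gremlins.species.scroller())
  .mogwai(gremlins.mogwais.gizmo().maxErrors(5))
  .unleash();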

How Medium does progressive image loading

Recently, I was browsing a post on Medium and I spotted a nice image loading effect. First, load a small blurry image, and then transition to the large image. I found it pretty neat and wanted to dissect how it was done.

[…]

I have run a WebPageTest test against this page on Medium, where you can see how it loads too. And if you want to see it for yourself, open Medium’s post in your browser, disable the cache and throttle the response so it takes longer to fetch the images; you’ll be able to watch the effect.

Here is what is going on:

  1. Render a div where the image will be displayed. Medium uses a <div/> with a padding-bottom set to a percentage, which corresponds to the aspect ratio of the image (e.g. 75% for a 4:3 image, since percentage padding is computed relative to the element’s width). Thus, they prevent reflows while the images are loaded, since everything is rendered in its final position. This has also been referred to as intrinsic placeholders.

  2. Load a tiny version of the image. At the moment, they seem to be requesting small JPEG thumbnails with a very low quality (e.g. 20%). The markup for this small image is returned in the initial HTML as an <img/>, so the browser starts fetching it right away.

  3. Once the image is loaded, it is drawn in a <canvas/>. Then, the image data is taken and passed through a custom blur() function. You can see it, a bit scrambled, in the main-base.bundle JS file. This function is similar, though not identical, to StackBlur’s blur function. At the same time, the main image is requested (see the sketch below).

  4. Once the main image is loaded, it is shown and the canvas is hidden.

All the transitions are quite smooth, thanks to the CSS animations applied.
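Here’s a hypothetical sketch of steps 2-4 - the class names, the blur() helper and the data attribute are illustrative assumptions, not Medium’s actual code:

// Assumed markup: <div class="placeholder"><img class="tiny" src="…"><canvas></canvas></div>
const placeholder = document.querySelector('.placeholder'); // hypothetical class names
const tiny = placeholder.querySelector('img.tiny');
const canvas = placeholder.querySelector('canvas');

tiny.addEventListener('load', () => {
  // 3. Draw the loaded thumbnail into the canvas, scaled up, then blur it:
  const ctx = canvas.getContext('2d');
  ctx.drawImage(tiny, 0, 0, canvas.width, canvas.height);
  blur(canvas); // stand-in for Medium's custom blur(), similar to StackBlur

  // …and request the full-size image at the same time:
  const full = new Image();
  full.src = placeholder.dataset.fullSrc; // hypothetical data attribute
  full.addEventListener('load', () => {
    // 4. Show the full image and hide the canvas; CSS transitions do the fade:
    placeholder.appendChild(full);
    canvas.style.opacity = '0';
  });
});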

Cars with Broken Windshield Wipers

I was stopped at an intersection the other day. It was raining. The road on the other side sloped upwards, so I could see the stopped cars on the other side of the road kind of stadium-seating style. I could see all their windshield wipers going all at the same time, all out-of-sync with each other. Plus a few of them had seemingly kinda broken ones that flapped at awkward times and angles.

What does that have to do with web design and development? Nothing really, other than that I took the scene as inspiration to create something, and it ended up being an interesting hodgepodge of “tricks”.